Association of Methyl Donor Nutrients’ Dietary Intake and Cognitive Impairment in the Elderly Based on the Intestinal Microbiome
Globally, cognitive impairment (CI) is the leading cause of disability and dependency among the elderly, presenting a significant public health concern. However, no pharmacological intervention currently cures or significantly reverses the progression of cognitive impairment. Methyl donor nutrients (MDNs), including folic acid, choline, and vitamin B12, have been identified as potential enhancers of cognitive function. Nevertheless, comprehensive research investigating the connection between the dietary intake of MDNs and CI remains scarce. In our study, we comprehensively assessed the relationship between MDNs' dietary intake and CI in older adults, utilizing 16S rRNA gene sequencing to investigate the potential underlying mechanisms. The results showed a clear difference in the methyl-donor nutritional quality index (MNQI) between the dementia (D) group and the dementia-free (DF) group: the MNQI was lower in the D group than in the DF group. For the gut microbiome, the beta diversity of the gut flora was higher in the high methyl-donor nutritional quality (HQ) group than in the low methyl-donor nutritional quality (LQ) group, and lower in the D group than in the DF group. Subsequently, we performed a correlation analysis to examine the relationship between the relative abundance of microbiota, the intake of MDNs, and Montreal Cognitive Assessment (MoCA) scores, ultimately identifying ten genera with potential regulatory functions. Additionally, KEGG pathway analyses suggested that one-carbon metabolism, chronic inflammation, and DNA synthesis are potential pathways through which MDNs may influence cognitive function. These results imply that MDNs might have the potential to enhance cognitive function through the regulation of microbiota homeostasis. This study offers dietary recommendations for the prevention and management of CI in the elderly.
Introduction
It is expected that dementia prevalence will rise as the global population ages, with projections indicating a tripling in the number of individuals living with this condition by the year 2050 from the estimated 47 million in 2015 [1]. Cognitive impairment (CI) includes both mild cognitive impairment (MCI) and dementia, the latter characterized by a more pronounced deterioration in cognitive abilities that hinders daily functioning and social engagement [2]. The long-term care of individuals with CI places a significant burden on patients, their families, and society. The estimated global cost of dementia in 2015 was USD 818 billion, a figure that is expected to rise as the prevalence of dementia increases [3]. This underscores the growing urgency of dementia as a significant global public health concern.
Historically, dementia has been regarded as an incurable disease. However, emerging evidence suggests that a significant proportion of dementia cases may be preventable [4]. MCI represents a transitional stage between normal cognitive function and dementia, often serving as an early indicator of the latter [5]. Consequently, early intervention has emerged as a promising approach for the prevention and management of dementia. Several studies have provided evidence suggesting that dietary patterns may confer protective benefits against cognitive decline, particularly in individuals with MCI who undergo intervention [6]. Furthermore, lower serum folate levels [7,8] and higher homocysteine levels [9,10] have been linked to an increased risk of conversion from any type of MCI to all-cause dementia, a relationship that is associated with one-carbon metabolism.
One-carbon metabolism (OCM) has been identified as playing a significant role in the initiation and progression of CI [11,12]. OCM is a complex system of biochemical reactions that supplies methyl groups to diverse biosynthetic pathways, supporting various physiological functions in the brain such as DNA synthesis, neurotransmitter synthesis, epigenetic regulation, and antioxidant defense mechanisms. Additionally, OCM serves as a pivotal point connecting multiple pathways [13]. The consumption of methyl donor nutrients (MDNs) such as protein, folate, choline, betaine, vitamin B2 (VB2), VB6, VB12, and zinc plays a crucial role in OCM [14,15]. Inadequacy of any of these nutrients can interfere with the intricate regulatory system responsible for maintaining OCM, resulting in impairments in brain function [16]. Recent studies have consistently reported the potential therapeutic effects of MDNs in CI. B vitamins, particularly folate, VB6, and VB12, play a role in homocysteine metabolism and can decrease the risk of CI [17,18]. Additionally, choline supplementation has been shown to elevate brain acetylcholine levels and suppress neuroinflammation, leading to enhancements in learning and memory [19]. Betaine has been found to inhibit the overactivation of hippocampal microglia and decrease oxidative stress, thereby serving as a preventive measure against CI [20]. A deep understanding of the correlation between MDNs and CI may provide valuable insights into effective dietary strategies for preventing and managing dementia. Nevertheless, few studies have thoroughly investigated the relationship between MDNs and CI.
The gut microbiota, a crucial element of the gut-brain axis, play a significant role in cognitive function through various mechanisms, such as modulating neurotransmitter synthesis and metabolism and regulating inflammatory and immune responses [21]. Recent research has increasingly demonstrated a correlation between gut microbial imbalance and the onset of CI [22]. While a definitive standard for gut microbiota homeostasis remains elusive, individuals with chronic illnesses often exhibit an elevated presence of pro-inflammatory bacteria and a diminished presence of beneficial bacteria. Patients with Alzheimer's disease (AD), for example, commonly display dysbiosis in their gut microbiota, characterized by reduced abundance and diversity of flora, heightened levels of pro-inflammatory bacteria, and an elevated Firmicutes/Bacteroidetes ratio [21,23,24]. Prior research has indicated that diets deficient in folate and VB12 may lead to a reduction in B vitamin-producing bacteria and an increase in inflammation-associated bacteria [25]. Thus, we posited that MDNs, considered jointly, may exert an influence on the gut microbiota, thereby potentially impacting CI.
To clarify the relationship between dietary MDNs and CI in the elderly, and to determine whether dietary MDNs may affect CI, we systematically assessed the association between the dietary intake of MDNs and CI in the elderly and explored the potential mechanisms based on the intestinal microbiota. This study has the potential to propose intervention strategies aimed at preventing CI in the elderly through a nutritional lens, as well as offering novel insights for the targeted manipulation of the intestinal microbiota.
Study Population
A cohort of 301 older adults aged 60-70 years was enrolled in the TALENTs trial (Targeting Aging and Longevity with Exogenous Nucleotides), a pragmatic four-month prospective single-center randomized controlled trial. The research protocols have been reported previously [26]. Participants were administered the Montreal Cognitive Assessment (MoCA) to assess cognitive health, with a MoCA score of 18 or lower indicating membership in the dementia (D) group, while those scoring above 18 were categorized as belonging to the dementia-free (DF) group [27]. Demographic characteristics and medical history pertaining to chronic diseases were obtained through the administration of a questionnaire. Subsequently, fecal samples from all participants were collected, processed, and preserved at −80 °C for future analysis. A total of 290 valid fecal samples were acquired, corresponding to the inclusion of 290 individuals in the final analysis. All participants provided written informed consent. The TALENTs trial was approved by Peking University's Biomedical Ethics Committee (IRB00001052-21114) and registered at ClinicalTrials.gov (NCT05243018).
Assessment of Methyl Donor Nutrients' Dietary Intake
The Photo-Assisted Dietary Intake Assessment (PAD) method was utilized to gather dietary intake data for MDNs. Previous research has demonstrated the accuracy and feasibility of the PAD method, particularly in its application to elderly populations [28]. Participants underwent standardized training and received real-time professional guidance. Photographs of participants' meals were taken before and after consumption over a period of three consecutive days to assess the intake of both regular and additional foods. We then determined the mean daily consumption levels of protein, folate, choline, riboflavin, VB6, VB12, betaine, and zinc. Finally, we calculated the methyl-donor nutritional quality index (MNQI) from the intake of these eight MDNs in accordance with the Chinese Dietary Reference Intakes (version 2023), in order to comprehensively evaluate the dietary intake of MDNs [29]. The methodology was derived from a prior research investigation that demonstrated a notable association between the MNQI and OCM, thus validating the MNQI as a reliable instrument for comprehensively assessing the nutritional quality of dietary MDN sources. As per the findings of that study, an MNQI score equal to or exceeding 6 is classified as high methyl-donor nutritional quality (HQ), while a score below 6 indicates low methyl-donor nutritional quality (LQ) [29].
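As a rough illustration of how such an index can be computed, the sketch below assumes the MNQI is the count of the eight MDNs whose mean daily intake reaches the corresponding reference intake; the exact scoring rules should be taken from the original index definition in [29], and the reference values shown are placeholders, not the actual Chinese DRI figures.

```python
# Minimal sketch of an MNQI-style score, assuming one point per methyl donor
# nutrient whose mean daily intake meets its reference intake (placeholder values).
REFERENCE_INTAKE = {  # hypothetical reference values, NOT the actual Chinese DRIs
    "protein_g": 65, "folate_ug": 400, "choline_mg": 450, "riboflavin_mg": 1.3,
    "vitamin_b6_mg": 1.6, "vitamin_b12_ug": 2.4, "betaine_mg": 150, "zinc_mg": 11,
}

def mnqi(mean_daily_intake: dict) -> int:
    """Count how many of the eight MDNs meet their reference intake."""
    return sum(mean_daily_intake.get(k, 0) >= v for k, v in REFERENCE_INTAKE.items())

def quality_group(score: int) -> str:
    """Classify as high (HQ) or low (LQ) methyl-donor nutritional quality (cut-off 6)."""
    return "HQ" if score >= 6 else "LQ"

# Example: a participant meeting 5 of the 8 reference intakes is classified as LQ.
intake = {"protein_g": 70, "folate_ug": 250, "choline_mg": 500, "riboflavin_mg": 1.4,
          "vitamin_b6_mg": 1.8, "vitamin_b12_ug": 2.0, "betaine_mg": 180, "zinc_mg": 9}
print(mnqi(intake), quality_group(mnqi(intake)))  # -> 5 LQ
```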
16S rRNA Gene Sequencing Analysis
We extracted fecal microbial DNA using the MGIEasy fecal genomic DNA (meta) extraction kit (BGI, Shenzhen, China). Subsequently, we used the primers PF (5′-ACTCCTACGGGGAGGCAGCAG-3′) and PR (5′-GGACTACNNGGGGTATCTAAT-3′) for polymerase chain reaction (PCR) amplification of the highly variable V3-V4 region of the bacterial 16S rRNA gene. The PCR amplification products were purified with Agencourt AMPure XP magnetic beads (Beckman Coulter, UK), dissolved in Elution Buffer, and labeled to complete the library construction. The fragment range and concentration of the libraries were determined using an Agilent 2100 Bioanalyzer (Agilent, CA, USA). The qualified libraries were sequenced on the MGISEQ-2000 platform (BGI, Shenzhen, China) according to the size of the inserted fragments, with paired-end sequencing. The raw data underwent filtering and splicing with FLASH (Fast Length Adjustment of Short reads, v1.2.11), which assembles paired reads from double-end sequencing into a unified sequence based on overlap relationships to obtain Tags within the high-variance region. Subsequently, USEARCH (v7.0.1090) was employed to combine the spliced Tags, which were then clustered at a 97% sequence similarity threshold to produce Operational Taxonomic Units (OTUs), with chimeric sequences eliminated using UCHIME (v4.2.40). After obtaining the OTU representative sequences, they were compared with the Ribosomal Database Project (RDP, http://rdp.cme.msu.edu/, accessed on 20 April 2024) for species annotation using the RDP classifier (v2.2) software, with the confidence threshold set to 0.6.
Bioinformatics Analysis
In order to study species diversity within a sample, alpha diversity analysis was performed, and the alpha diversity indices at the OTU level, including the Chao1 index, ACE index, and Shannon index, were calculated using mothur software (v1.31.2). Beta diversity analysis was performed to compare the diversity of species between samples; the 16S rRNA gene beta diversity was analyzed using QIIME (v1.80) software. A predictive functional analysis of the microbiota was performed using PICRUSt2 (v2.3.0-b) software to relate species to their functions.
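The study computed these indices with mothur; as a language-agnostic illustration of what such indices measure, the sketch below computes the Shannon and Chao1 indices directly from a toy OTU count vector with NumPy (the ACE index follows an analogous formula based on rare-OTU counts).

```python
import numpy as np

def shannon_index(otu_counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over OTUs with non-zero counts."""
    counts = np.asarray(otu_counts, dtype=float)
    counts = counts[counts > 0]
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

def chao1_index(otu_counts):
    """Chao1 richness estimate: S_obs + F1*(F1 - 1) / (2*(F2 + 1))."""
    counts = np.asarray(otu_counts)
    s_obs = int((counts > 0).sum())
    f1 = int((counts == 1).sum())  # singleton OTUs
    f2 = int((counts == 2).sum())  # doubleton OTUs
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))

sample = [120, 30, 30, 5, 1, 1, 2, 0, 0]  # toy OTU counts for one sample
print(shannon_index(sample), chao1_index(sample))
```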
Intergroup Venn diagrams were plotted to illustrate the overlap of OTUs between groups using the VennDiagram package in R (v3.4.1). Beta diversity was expressed as unweighted and weighted UniFrac distances and tested for intergroup differences using ANOSIM with the QIIME (v1.80) software. In addition, Partial Least Squares Discriminant Analysis (PLS-DA) was performed using the mixOmics package to assess differences in beta diversity. The pheatmap package was used to generate clustered correlation heatmaps, while the Wilcoxon rank-sum test was used to determine differences in microbial abundance among taxa. Differences in microbial abundance were used to identify important species or functions.
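A minimal sketch of the intergroup beta diversity test, assuming scikit-bio is available: it runs ANOSIM on a precomputed distance matrix (in practice, the unweighted or weighted UniFrac distances exported from QIIME), mirroring the analysis described above. The sample IDs, distances, and group labels are toy values.

```python
import numpy as np
from skbio.stats.distance import DistanceMatrix, anosim

# Toy 4x4 symmetric distance matrix and D/DF group labels (illustrative only).
dist = np.array([[0.0, 0.2, 0.7, 0.8],
                 [0.2, 0.0, 0.6, 0.7],
                 [0.7, 0.6, 0.0, 0.3],
                 [0.8, 0.7, 0.3, 0.0]])
dm = DistanceMatrix(dist, ids=["s1", "s2", "s3", "s4"])
groups = ["D", "D", "DF", "DF"]

# ANOSIM tests whether between-group distances exceed within-group distances.
result = anosim(dm, grouping=groups, permutations=999)
print(result["test statistic"], result["p-value"])
```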
Statistical Analysis
The values of numerical variables are given as the mean ± standard deviation, and categorical variables are given as percentages. Nonparametric tests (including Wilcoxon rank-sum tests and Mann-Whitney U tests), independent-samples t-tests, and Pearson's chi-square tests were conducted to assess the differences between groups. Correlations between numerical variables were analyzed using Spearman's correlation analysis. We performed all statistical analyses using R (v3.4.1). Statistical significance was defined as p less than 0.05.
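The study used R; equivalent tests are available in SciPy. The sketch below is an illustration of the group comparisons and correlation analysis described above, with synthetic toy data and illustrative variable names rather than study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
moca_d, moca_df = rng.normal(15, 3, 40), rng.normal(24, 3, 60)    # toy MoCA scores
mnqi_d, mnqi_df = rng.integers(2, 7, 40), rng.integers(4, 9, 60)  # toy MNQI values

# Independent-samples t-test (e.g. age) and Mann-Whitney U / Wilcoxon rank-sum
# test (e.g. skewed nutrient intakes) for D vs. DF group differences.
t_stat, t_p = stats.ttest_ind(moca_d, moca_df)
u_stat, u_p = stats.mannwhitneyu(mnqi_d, mnqi_df, alternative="two-sided")

# Pearson chi-square test for categorical variables (e.g. gender by group).
table = np.array([[18, 22], [30, 30]])  # toy 2x2 contingency table
chi2, chi_p, _, _ = stats.chi2_contingency(table)

# Spearman correlation between numerical variables (e.g. MNQI vs. MoCA).
rho, rho_p = stats.spearmanr(np.concatenate([mnqi_d, mnqi_df]),
                             np.concatenate([moca_d, moca_df]))
print(t_p, u_p, chi_p, rho, rho_p)
```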
Demographic Data for D Group and DF Group
The distribution of the general demographic characteristics for the D and DF groups was examined, and no significant differences were found in gender, age, education level, living alone or not, body mass index (BMI), or history of chronic diseases between the two groups (Table 1). The differences in gender, highest educational level, living alone or not, BMI, and chronic disease history between groups were analyzed by Pearson's chi-square test. The differences in age were examined by an independent-samples t-test.
Correlation between Dietary Intake of MDNs and Cognitive Function
The distribution of the MNQI was examined in both the D and DF groups. Table 2 illustrates the disparities in the median dietary intake of MDNs and in the MNQI between the D and DF groups. Compared to the DF group, the D group scored significantly lower on the MNQI (p < 0.05), with notable differences observed in protein, choline, riboflavin, and zinc intake (p < 0.05). Furthermore, there was a noticeable decrease in the intake of folate, VB6, VB12, and betaine among participants in the D group. This suggests a potential correlation between the intake of methyl-donor nutrients (MDNs) and cognitive function, with a higher MDN intake potentially benefitting cognitive function. To determine whether dietary MDNs affect cognitive function through the intestinal flora, a comparison of the microbial composition of the D and DF groups was conducted. The findings revealed a total of 1652 operational taxonomic units (OTUs) in the D group and 1968 OTUs in the DF group, with 1596 OTUs shared between both groups (Figure 1A). At the phylum level, the composition of the intestinal flora in both groups included 28 phyla, with Firmicutes, Bacteroidetes, Proteobacteria, Actinobacteria, Fusobacteria, and Verrucomicrobia being the dominant phyla. However, there were variations in the distribution of these phyla between the two groups. In comparison with the DF group, the D group showed a higher ratio of Firmicutes to Bacteroidetes (Figure 1B).
Furthermore, an evaluation was conducted to compare the gut flora diversity between the two groups. The alpha diversity of the flora was assessed utilizing the Chao1, ACE, and Shannon indices. The D and DF groups showed no statistically significant differences in alpha diversity (p > 0.05, Figure 2A-C). The beta diversity index was utilized to assess the variation in species abundance distribution between the groups. The findings indicated a notable divergence in the composition of the intestinal flora between the D and DF groups (Figure 2E,F), with the D group exhibiting lower beta diversity compared to the DF group (p < 0.05, Figure 2D,E). Therefore, there is a notable discrepancy in the abundance distribution of intestinal flora between the two groups. Specifically, patients with dementia exhibit decreased diversity in their intestinal flora, reflecting specific alterations in the abundance distribution of certain taxa.
Relationship between MDNs' Intake and Intestinal Flora
In this study, we employed beta diversity indices to assess the impact of dietary MDNs on the intestinal flora composition. The results demonstrated a significant difference in the distribution of intestinal flora abundance between the LQ and HQ groups, with the LQ group exhibiting lower beta diversity compared to the HQ group (p < 0.001, Figure 3A). Furthermore, the findings indicate that the altered abundance distribution of gut flora may be closely related to cognitive function, particularly given the lower beta diversity observed in the D group. Subsequently, in order to investigate the relationship between specific gut microbiota and MDNs, the relative abundance of the gut microbiota was analyzed at the genus level. We then conducted a preliminary screening to identify gut microbiota influenced by MDNs, based on the presence of statistically significant correlations between the genera and the MNQI or MDNs' intake. Given the necessity of incorporating a greater number of potential flora in the screening phase to facilitate a broader selection for subsequent comprehensive investigations, a significance level of p < 0.1 was adopted for this screening step. Figure 4 illustrates the initial screening of 115 candidate genera, which were subsequently categorized into two clusters based on their correlation with MDNs' intake. Specifically, 85 genera exhibited a positive correlation with MDNs' intake, while 30 genera showed a negative correlation. Among these genera, Gp16, Sphingomonas, Gp6, and Allobaculum exhibited significant positive correlations with the intake of MDNs, particularly demonstrating strong associations with the MNQI and folate intake. Conversely, genera such as Clostridium_XI and Maribacter displayed notable negative correlations with the MNQI. Furthermore, a consistent trend was observed whereby the MNQI and MDNs' intake up- or down-regulated the relative abundance of these genera.
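As a sketch of this screening step (assuming a pandas data frame of genus relative abundances and a data frame of the MNQI and individual MDN intakes; column and variable names are illustrative), each genus is correlated with each nutrient variable by Spearman's method and retained if any correlation reaches p < 0.1:

```python
import pandas as pd
from scipy.stats import spearmanr

def screen_genera(abundance: pd.DataFrame, nutrients: pd.DataFrame, p_cut: float = 0.1):
    """Keep genera whose relative abundance correlates (p < p_cut) with at least
    one nutrient variable (MNQI or an individual MDN intake)."""
    kept = {}
    for genus in abundance.columns:
        for nutrient in nutrients.columns:
            rho, p = spearmanr(abundance[genus], nutrients[nutrient])
            if p < p_cut:
                kept.setdefault(genus, []).append((nutrient, round(rho, 2)))
    return kept

# Illustrative call: rows are participants, columns are genera / nutrient variables.
# candidates = screen_genera(genus_abundance_df, mdn_intake_df)
```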
Screening for Specific Genera That May Mediate the Association of MDNs with Cognitive Function
In order to investigate potential genera that may play a role in the relationship between MDNs and dementia, we conducted an analysis of the correlation between MoCA scores and the relative abundance of the 115 genera initially identified. As before, a significance level of p < 0.1 was used to determine statistical differences during this screening phase. Table 3 displays the screening results for 10 potential genera, of which 7 were identified as having cognitive benefits. These genera, including Victivallis, Turicibacter, Phascolarctobacterium, Snodgrassella, Terrimonas, Planomicrobium, and Centipeda, exhibited a positive correlation in relative abundance with both MoCA scores and the intake of MDNs. Additionally, MoCA scores and the intake of MDNs were negatively correlated with the relative abundance of three cognitively harmful genera (Porphyromonas, Peptoniphilus, and Howardella). These findings imply that MDNs may have a significant impact on cognition through these genera. Thus, the study implies that dietary MDNs may improve cognitive function to some extent.
Finally, we performed PICRUSt functional prediction analysis on the obtained 16S rRNA sequences to determine the enrichment of functional genes in KEGG pathways, in order to speculate on the possible mechanisms by which the screened intestinal flora mediate the effects of MDNs on dementia. Due to the limited sample size, p < 0.1 was considered statistically different. The findings indicated a significantly higher relative abundance of genes associated with 17 pathways, such as Sulfur metabolism, Phenylalanine metabolism, Biotin metabolism, beta-Alanine metabolism, Glyoxylate and dicarboxylate metabolism, and Glutathione metabolism, in the D group compared to the DF group. The pathways that were predominantly enriched among the up-regulated genes included Biotin metabolism, Folate biosynthesis, Selenocompound metabolism, Lipopolysaccharide biosynthesis, and Sulfur metabolism. In comparison to the DF group, the D group exhibited a significantly lower relative abundance of genes involved in 21 pathways, such as Peptidoglycan biosynthesis, RNA polymerase, Lysine biosynthesis, Ribosome, Aminoacyl-tRNA biosynthesis, D-Glutamine and D-glutamate metabolism, Fatty acid biosynthesis, and Nucleotide excision repair. Among the down-regulated pathways, Valine, leucine and isoleucine biosynthesis, D-Glutamine and D-glutamate metabolism, Peptidoglycan biosynthesis, Mismatch repair, and Lysine biosynthesis were mainly enriched (Figure 5).
Discussion
OCM plays a crucial role in preserving brain health. The inadequate consumption of dietary folate and VB12 has been linked to disturbances in OCM, leading to the dysregulation of the methylation pathway, increased oxidative stress, and enhanced protein deposition, ultimately hastening the development and advancement of CI [11]. Through interactions within the OCM pathway, methyl donors regulate metabolism, immune responses, and epigenetic events in animals [30]. Consequently, it is imperative to thoroughly investigate the association between the overall intake of MDNs and health outcomes. This study represents the first comprehensive assessment of the relationship between MDNs' intake and cognitive function. The findings revealed a significant correlation between cognitive function and dietary MDNs in older adults, suggesting that the intake of these nutrients may have a positive impact on cognitive health status.
There is growing evidence linking imbalances in gut microbiota to the onset of neurodegenerative diseases [22], including AD [31], Parkinson's disease [32], and multiple sclerosis [33]. Individuals with dementia have been observed to display dysbiosis in their gut microbiota, characterized by diminished abundance and diversity of flora [34]. In our study, compared with those without dementia, individuals with dementia had significantly lower gut flora beta diversity, with distinct differences in flora composition and abundance between the two groups, consistent with previous research [35]. Additionally, dietary factors can influence the composition and function of the gut microbiota, impacting the metabolic, immune, and neurological functions of the host through metabolically active substances that may contribute to the etiology and pathogenesis of CI [36][37][38]. Moreover, this suggests that diets enriched with MDNs may alter the intestinal environment and change the composition and distribution of the intestinal flora. Our research revealed a notable disparity in the beta diversity of intestinal flora between individuals with a low versus high MDN intake. Consequently, our findings suggest that individuals with CI may exhibit distinct changes in their gut microbiota, and that dietary MDNs may influence cognitive function through the modulation of host gut-brain axis signaling [39].
Subsequently, utilizing correlation analysis, we identified particular genera that exhibited a significant association with both the intake of MDNs and MoCA scores. Our findings suggest that heightened levels of Victivallis, Turicibacter, Phascolarctobacterium, Snodgrassella, Terrimonas, Planomicrobium, and Centipeda may potentially enhance cognitive function, and that the consumption of substantial quantities of MDNs could potentially elevate their prevalence. On the other hand, the elevated presence of Porphyromonas, Peptoniphilus, and Howardella has been linked to impaired cognitive function, whereas the consumption of substantial quantities of MDNs may reduce their prevalence, consequently enhancing cognitive well-being. Research on the impact of microbes on host health is providing insight into the roles of specific bacterial genera. For instance, a study examining Victivallis, a bacterium believed to have cognitive benefits, discovered that older adults with preclinical AD had lower levels of Victivallis in their gut microbiota compared to those without the disease, particularly in individuals testing positive for amyloid-β (Aβ), a pathological marker of AD [40]. A subsequent cross-sectional study involving individuals aged 70 and older revealed a decrease in Victivallis relative abundance among those with MCI [41]. The findings of this study align with previous research indicating a potential association between Turicibacter, known for its role in heightened bile acid production, and the disruption of bile acid homeostasis in the pathogenesis of AD, suggesting a possible neuroprotective effect of Turicibacter [42,43]. Studies have indicated that the screen-detected pathogenic bacterium Porphyromonas has the ability to colonize the brain and induce neuroinflammation, resulting in elevated levels of Aβ1-42, a protein associated with dementia, particularly in cases of chronic periodontitis [44,45]. Furthermore, Peptoniphilus, an opportunistic pathogen, is frequently identified as the principal causative agent of inflammation resulting from diverse infections, notably chronic wounds such as diabetic foot ulcers and chronic compression wounds [46]. This pathogen is also linked to energy metabolism, leading to the production of the short-chain fatty acid butyrate [47,48]. Also, colonic microbes have the capability to synthesize B vitamins, supply nutrients to both the host and microbiota, and influence immune cell activity, as well as impact the nervous system through the production of neurotransmitters [16]. Thus, based on the flora screened, MDNs may affect cognitive function through a variety of pathways mediated by the gut microbiota, such as the modulation of neuroinflammation and immunity, the maintenance of bile acid homeostasis, the provision of OCM nutrients, and energy metabolism. The full extent of the functions of these flora remains unknown, given the evolving understanding of the gut microbiome in medical contexts. Consequently, the intricate pathways through which gut flora influence brain health have yet to be fully elucidated, necessitating further exploration of the specific regulatory mechanisms involved.
Hence, in order to seek the potential pathways through which dietary MDNs may impact cognitive function via the gut microbiota, we utilized gut microbiomics for gene function annotation. In our study, it was observed that metabolic pathways associated with OCM, including Folate biosynthesis, Sulfur metabolism, Taurine and hypotaurine metabolism, and Glutathione metabolism, among others, were significantly enriched in the D group. Conversely, the Cysteine and methionine metabolism pathway exhibited significantly lower enrichment in this group. Interestingly, these findings indicate that a deficiency in MDNs may lead to inadequate substrates for crucial reactions in the OCM pathway, subsequently increasing the body's requirement for MDNs and prompting compensatory alterations in OCM-related pathways [49]. Additionally, the heightened oxidative stress and buildup of harmful substances associated with neurodegenerative conditions could amplify the body's response to stress [50,51]. These findings indicate that disruption of the complex regulatory network responsible for maintaining OCM, due to a deficiency in any individual MDN, contributes to the accelerated progression of CI [16,52].
Additionally, some studies have shown that there are interactions between MDNs (folate and VB12) and the gut microbiota. Specific flora within the gut have been found to contribute significantly to the daily intake of folic acid, as well as various B vitamins including biotin, cobalamin, niacin, pantothenate, pyridoxine, riboflavin, and thiamine [53]. Conversely, diets lacking in folate and VB12 over an extended period can result in the reduced production of B vitamins by gut bacteria [25]. Based on these previous findings [25,53], our results indicate that a deficiency in MDNs may lead to a reduction in B vitamin-producing bacteria, which might impair OCM in brain tissue, with subsequent impacts on the nervous system. Notably, significantly higher gene enrichment of the Lipopolysaccharide biosynthesis pathway was observed in the D group compared to the DF group. Porphyromonas was more abundant in HQ individuals compared to LQ individuals. It has been reported that folic acid supplementation reduces gut inflammation and mitigates oxidative stress and neurotoxicity, ultimately protecting neurons from damage [54]. Thus, it is suggested that high intake levels of MDNs may potentially help to slow down the progression of CI by reducing inflammatory flora, thereby attenuating chronic neuroinflammation, which is in accordance with the previous report [54]. Furthermore, our study revealed a decrease in the gene enrichment of six pathways associated with DNA synthesis and repair, specifically Mismatch repair, Homologous recombination, DNA replication, Pyrimidine metabolism, Base excision repair, and Nucleotide excision repair, in the D group as compared to the DF group. Additionally, it is important to note that DNA synthesis and repair processes are intricately linked to OCM [55]. In a state of folate deficiency, the aberrant activation of serine hydroxymethyltransferase 1 (SHMT1) disrupts thymidylate biosynthesis, potentially resulting in cognitive dysfunction [56]. Taken together, the study may provide some clues that the modulation of intestinal flora homeostasis, the maintenance of OCM, the reduction in inflammation and oxidative stress, and the regulation of DNA synthesis and repair by MDNs may have a positive effect on cognitive function (see Figure 6 for illustration). Nonetheless, the intricate interplay among these pathways remains incompletely understood, necessitating additional mechanistic investigations.
Furthermore, in order to illustrate the significance of early intervention, an analysis was conducted on the distribution of the MNQI among the D, MCI, and normal cognition (NC) populations. The results in the Supplementary Material revealed that over half of the individuals assessed were at risk of developing MCI, indicating a substantial portion of the population at heightened risk for dementia [57]. Moreover, there was no statistically significant difference in the MNQI between the MCI and NC groups, which was reflected as a mildly lower MNQI in the MCI group than in the NC group. In contrast, the MNQI of the D group was significantly lower than that of both the MCI and NC groups. This, in conjunction with the reversible nature of MCI, underscores the significance of early intervention with MDNs during the MCI stage to mitigate the risk of progression to dementia [2]. In addition, our study revealed that the consumption of MDNs among the elderly fell significantly below the recommended levels (see Supplementary Tables S2 and S3). This deficiency of MDNs may have particularly pronounced consequences for the nervous system as digestive and absorptive capabilities weaken with age. Consequently, it is advised that elderly individuals consider taking appropriate preventive measures, such as supplementation with MDNs, to potentially mitigate the decline in cognitive function.
We applied the MNQI for the first time to assess the relationship between MDNs' intake and cognitive function in older adults, explored the potential mechanisms involving the gut microbiome, and screened for specific bacterial genera, a topic that has received limited attention in prior research. Nevertheless, this study is constrained by certain limitations. Specifically, the sample size and study design are limited, leading to limited statistical power, particularly in the realms of bacterial screening and functional prediction. Moreover, as a community-based study, this work cannot establish causation or provide segmented clinical diagnoses and treatment recommendations, as clinical trials can. Consequently, the present findings primarily serve as a foundation for future research. To more precisely elucidate the dynamic interplay between nutrient intake, the gut microbiota, and cognitive function, it is advisable to conduct future longitudinal studies with larger sample sizes. Furthermore, the inability to obtain the serum exposure levels of MDNs in the current study due to methodological constraints necessitates careful consideration in future research endeavors. Additionally, it is recommended that upcoming studies integrate multi-omics data to elucidate the effect of the gut microbiota on cognitive function, including its role in processes such as one-carbon metabolism, nutrient metabolism, immune modulation, and DNA synthesis. Ultimately, the gut flora identified through the screening conducted in this study offer preliminary insights for directing gut flora interventions aimed at enhancing cognitive function. It is advisable that future research endeavors encompass gut flora-focused animal or population intervention trials to substantiate their efficacy.
Conclusions
In conclusion, our research offers initial findings on the possible relationship between MDNs' consumption and CI in the elderly, highlighting the significant influence of gut microbiota composition and diversity in this correlation. The study underscores the importance of the early adoption of MDNs to preserve cognitive function in older individuals and identifies specific gut genera that may benefit cognitive performance. These findings will help deepen our understanding of the intricate relationship between gut microbiota balance and neurological well-being in the aging population.
3.3. Potential Role of Intestinal Flora in the Association between MDNs and Cognitive Function
3.3.1. Differences in Intestinal Flora between Groups D and DF
Figure 1. Distribution of identified intestinal flora in dementia (D) and dementia-free (DF) groups. (A) Venn diagram showing the distribution of OTUs between the D and DF groups. (B) Intestinal flora composition of the two groups at the phylum level.
Figure 2. Comparisons of intestinal flora diversity between dementia (D) and dementia-free (DF) groups. (A) Chao1 index, (B) ACE index, and (C) Shannon index for assessing alpha diversity; p-values indicate differences assessed by the Wilcoxon rank-sum test. Beta diversity based on (D) unweighted and (E) weighted UniFrac distance analysis between the two groups. (F) PLS-DA analysis of intestinal flora between the two groups; p-value indicates differential clustering assessed by the ADONIS test. * p < 0.05, ** p < 0.001.
Figure 3. Comparisons of intestinal flora diversity between the LQ and HQ groups. Beta diversity based on (A) unweighted and (B) weighted UniFrac distance analysis between the HQ and LQ groups. (C) PLS-DA analysis of intestinal flora between the HQ and LQ groups. ** p < 0.001.
Figure 4. Heatmap clustering plot of correlations between the relative abundance of the initially screened genera and MDNs' intake. Red: positive correlation; blue: negative correlation.
Figure 5. Significant differences between the D and DF groups at the third level of the KEGG pathway, p < 0.1. Compared with the DF group, the relative abundance of (A) up-regulated pathways and (B) down-regulated pathways.
Figure 6. The potential mechanisms of dietary MDNs for cognitive function improvement through the gut microbiota.
Table 2. Differences in MDNs' intake between dementia (D) and dementia-free (DF) groups. * p < 0.05, according to the Mann-Whitney U test.
Table 3. Correlations between the relative abundance of screened genera and MoCA, MNQI, and MDNs' intake. p < 0.1, according to the Spearman rank correlation analysis test.
3.3.4. Predicting Possible Mechanisms by which the Screened Genera Play a Role in Mediating the Association of MDNs with Cognitive Function
"Medicine",
"Environmental Science",
"Biology"
] |
Study on the L–H transition power threshold with RF heating and lithium-wall coating on EAST
The power threshold for low (L) to high (H) confinement mode transition achieved by radio-frequency (RF) heating and lithium-wall coating is investigated experimentally on EAST for two sets of walls: an all-carbon wall (C) and a molybdenum main chamber with a carbon divertor (Mo/C). For both sets of walls, a minimum power threshold Pthr of ~0.6 MW was found when EAST operates in a double null (DN) divertor configuration with intensive lithium-wall coating. When operating in upper single null (USN) or lower single null (LSN), the power threshold depends on the ion ∇B drift direction. The low-density dependence of the L–H power threshold, namely an increase below a minimum density, was identified in the Mo/C wall for the first time. For the C wall only the single-step L–H transition with limited injection power is observed, whereas the so-called dithering L–H transition is also observed in the Mo/C wall. The dithering behaves distinctively in the USN, DN and LSN configurations, suggesting that the divertor pumping capability is an important ingredient in this transition, since the internal cryopump is located underneath the lower divertor. Depending on the chosen divertor configuration, the power across the separatrix Ploss increases with neutral density near the lower X-point in EAST with the Mo/C wall, consistent with previous results in the C wall (Xu et al 2011 Nucl. Fusion 51 072001). These findings suggest that the edge neutral density, the ion ∇B drift as well as the divertor pumping capability play important roles in the L–H power threshold and transition behaviour.
Introduction
The high-confinement mode, i.e. H-mode, is the baseline operational scenario for ITER [1]. Since the power available in the initial phase of ITER operation is limited (~73 MW), accessing the H-mode with a heating power as low as possible is a crucial issue.
The flexible poloidal magnetic field control system in EAST can accommodate operation with different divertor configurations. In order to demonstrate high-performance, long-pulse plasma operation in preparation for ITER, the carbon wall is being replaced by a metal wall step by step. The full carbon wall in the 2010 campaign (C wall) was replaced by molybdenum in the main chamber and carbon in the divertor in the 2012 campaign (Mo/C wall) [2]. EAST now routinely uses lithium coating as a main wall conditioning technique [3]. In addition, EAST has ITER-like heating schemes, i.e. dominated by electron heating from a lower hybrid current drive (LHCD) and ion cyclotron resonance heating (ICRH). These capabilities make EAST a unique platform for low/high (L-H) power threshold studies.
A theory-based prediction for the H-mode power threshold (P_thr) for ITER is still absent due to the ambiguous understanding of the physics of the L-H transition. The latest and widely used ITPA threshold scaling is P_thr,08 = 0.0488 n_e^0.72 B_T^0.80 S^0.94 (MW), where n_e is the central line-averaged electron density in 10^20 m^-3, B_T is the magnetic field in T, and S is the plasma surface area in m^2 [4]. However, ongoing research from many devices found that other 'hidden variables' affect the L-H power threshold. The impact of divertor configuration on P_thr is found in several devices, such as NSTX [5], JET [6] and EAST [7]. Recently, the L-H power threshold studies on JET [8] found that P_thr is lower when the PFCs are fully covered by metal. Motivated by these observations, the impacts of divertor geometry and lithium wall conditioning for two sets of wall materials on the L-H transition and P_thr are studied in detail in EAST.
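For orientation, the sketch below evaluates the 2008 ITPA scaling for EAST-like parameters; the plugged-in values are illustrative, roughly in the ranges quoted later in this paper, and do not correspond to a specific discharge.

```python
def p_thr_2008(ne_1e20, bt_tesla, surface_m2):
    """ITPA 2008 L-H threshold scaling: 0.0488 * ne^0.72 * BT^0.80 * S^0.94 (MW),
    with ne the central line-averaged density in units of 1e20 m^-3."""
    return 0.0488 * ne_1e20**0.72 * bt_tesla**0.80 * surface_m2**0.94

# Illustrative EAST-like parameters: ne = 2.5e19 m^-3, BT = 1.7 T, S = 40 m^2.
print(round(p_thr_2008(0.25, 1.7, 40.0), 2))  # ~0.88 MW
```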
The paper is organized as follows: section 2 presents the experimental setup and a global description of the experiments. The main results in the previous L-H threshold experiments with the C wall in the 2010 experimental campaign are described in section 3. Section 4 addresses the findings obtained in 2012 with the Mo/C wall. A summary and discussion are presented in section 5.
Experimental setup
EAST is a medium-size fully superconducting tokamak with a major radius R = 1.9 m and a minor radius a = 0.5 m. EAST has an ITER-like D-shaped poloidal cross-section with a plasma cross-sectional area of ~1 m^2, a plasma volume of ~11 m^3 and a plasma boundary surface area of ~40 m^2. From the initial operation phase up to 2010, EAST was equipped with 2 MW LHCD and 1.5 MW ICRH systems. These two systems were upgraded to 4 MW and 6 MW respectively during its shutdown in 2011/2012. A detailed description of LHCD and ICRH experiments can be found in [2]. During the 2010 and 2012 campaigns, EAST was equipped with one in-vessel cryopump, located underneath the lower outer divertor target as shown in figure 1, which significantly enhanced the particle exhaust and recycling control capability in H-mode discharges, with a nominal pumping speed for deuterium of ~75.6 m^3 s^-1 [2]. In order to avoid damaging the water-cooling pipes behind the pumping slots, the strike points on the divertor targets were shifted a few centimeters away from the slots in the 2012 campaign, relative to the 2010 campaign.
The discharges were conducted under lithium-coated wall conditioning, and detailed discussions of the lithium coating between the two campaigns can be found in Zuo et al's paper [3]. Although multiple L-H transitions can occur in a shot, only the first L-H transitions are studied in this paper. The L-H power threshold comparison is conducted for three different divertor configurations, i.e. lower single null (LSN), double null (DN) and upper single null (USN). In the 2010 campaign, the toroidal magnetic field, B_T, and the plasma current, I_p, were both in the anticlockwise direction viewed from the top (B × ∇B ↑). In the 2012 campaign, there were a few shots with B_T in the clockwise direction (B × ∇B ↓); I_p kept the same direction. Note that, unfortunately, we made a mistake with the field direction in the 2010 campaign (reference [9], the third paragraph on p 2). Here we have double-checked the direction of B_T: the field direction can be deduced through the E × B flow velocity together with the toroidal and poloidal rotation directions measured by the magnetic coils. In addition, the pitch angle of ELM filaments at the low-field side gives us another approach to check the field direction. To make sure of this, the directions were directly measured using a compass at multiple points in the vicinity of the EAST tokamak in the 2012 campaign. In this study, the L-H transitions are identified by the sudden drop of the divertor Dα emission measured by a photodiode array and the increase of the line-averaged electron density n_e measured by the central chord of the far-infrared (FIR) interferometer. The lines of sight of the diagnostics are also shown in figure 1.
In this article, discharges were operated in deuterium with divertor configurations, and all the L-H transitions in the plots occurred during the plasma flat top. The net power across the separatrix at the L-H transition, P_loss, is calculated as P_loss = P_aux + P_ohmic − P_rad − dW_dia/dt, with P_aux the absorbed auxiliary heating power from either one of the LHCD and ion cyclotron range of frequency (ICRF) heating schemes, or the combined power from both (taking into account reflections and absorptions of the different heating schemes), P_ohmic the Ohmic power, P_rad the radiated power from the bulk plasma measured by absolute extreme ultraviolet (AXUV) arrays and dW_dia/dt the rate of change of the diamagnetic stored energy [9,10]. To improve the LHW coupling, the outer gap was optimized by isoflux plasma boundary control and local gas puffing near the LHW launcher. With these efforts the average reflection coefficient stayed low, in the range <10%. For LHCD, a combination of ray-tracing and Fokker-Planck calculations using the C3PO/LUKE codes with a so-called spectral tail model was performed in order to identify the LH power deposition. It should be pointed out that the absorption coefficient in the experiment should be smaller than the theoretical value due to spectral broadening. In the experiments, the absorbed fraction of the LH power can be determined with a method utilizing the time derivative of the total stored energy [11]. As the hydrogen concentration was reduced to below 10%, effective ICRF heating was observed. The ICRF coupling efficiency can be simply estimated by calculating dW_dia/dt at the time of the ICRF power turn-on [12]. The estimated error bars of P_loss for the dataset are 10-20% of their values. Most of the data have been obtained with a marginal input power in the C wall, which largely reduces the experimental uncertainties due to dW_dia/dt. However, the dW_dia/dt contribution might be larger due to the combined heating scheme of LHCD and ICRF in the Mo/C wall studies. All global parameters are averaged over a time interval of 10 ms in the L-mode or intermediate phase just before the time of the L-H transition, where the impurity concentrations are low and the ratio of radiation power to heating power is rather low. In this study, the L-H power threshold is studied in terms of two types of transitions, labeled as single-step L-H and dithering L-H transitions, which can be identified by characterizing the divertor Dα emission prior to the final transition. In the following, unless otherwise specified, small-amplitude limit cycle oscillations (LCOs) refer to the intermediate phase preceding the single-step L-H transition, and dithering cycles or the dithering phase represents the intermediate phase prior to the dithering L-H transition.
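A minimal sketch of the P_loss bookkeeping defined above (all inputs in MW; the example numbers are illustrative, not measured data from a particular shot):

```python
def p_loss(p_aux, p_ohmic, p_rad, dwdia_dt):
    """Net power across the separatrix: P_loss = P_aux + P_ohmic - P_rad - dW_dia/dt."""
    return p_aux + p_ohmic - p_rad - dwdia_dt

# Illustrative numbers: 0.8 MW absorbed LHCD, 0.25 MW Ohmic, 0.15 MW radiated,
# 0.1 MW from the stored-energy time derivative -> P_loss = 0.8 MW.
print(p_loss(0.8, 0.25, 0.15, 0.1))
```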
H-mode access and the typical L-H transitions
In the 2010 campaign, the first H-mode plasma appeared after strong lithium-wall conditioning both by lithium evaporation and real-time lithium powder injection at the plasma edge. The H-mode plasmas, typically with an H factor of H_IPB98(y,2) ~ 1, were obtained with ~1 MW lower hybrid wave power. In this campaign about 485 H-mode discharges were obtained. Stationary ELMy H-mode plasmas up to 6.4 s were produced in a wide range of operation parameters: B_T = 1.4–2 T, I_p = 0.4–0.8 MA, n_e = (1.9–3.4) × 10^19 m^-3, plasma surface area S = 38–42 m^2, with DN (35% of H-mode shots) or intentionally unbalanced DN configurations (32% of H-mode shots) as well as the LSN divertor configuration (13% of H-mode shots), with the ion ∇B drift towards the upper divertor. However, no H-mode was obtained with the USN configuration. This is probably due to the limited heating power available then. In addition, 20% of the L-H transitions occurred during the plasma ramping phase, which is not discussed further here.
The small-amplitude LCOs prior to the L-H transitions first appeared in the 2010 campaign. These LCOs, characterized by small amplitude (RMS/MEAN ~ 3%) and high frequency (up to 4 kHz) appearing in the target Dα emission signals with the input heating power very marginal to the transition threshold, were first demonstrated in [13]. The associated L-H transitions are single-step L-H transitions, which are typically characterized by a single-step reduction both in the ion saturation current from the divertor target embedded Langmuir probes (figures 2(a) and (d)) and in the divertor Dα emission (figures 2(b) and (e)). Some single-step L-H transitions are preceded by small-amplitude LCOs, with the cycle phase lasting on a ~100 ms time scale and a frequency around 2 kHz, as shown in figure 2(f). Note that these single-step L-H transitions can be found both in DN and LSN configurations, with the ion ∇B drift away from the major X-point.
L-H power threshold and divertor configuration effects
All the L-H transitions were accessed by predominantly LHCD heating in 2010. In figure 3(a) the power across the separatrix P_loss at the L-H transitions is plotted versus the threshold power P_thr,08 predicted by the international tokamak scaling, showing that it basically follows the scaling in a limited region of the operation window. In figure 3(b) P_loss is plotted as a function of n_e. All available data are in the low density range, n_e = 1.9-3.4 × 10^19 m^-3. We did not see a clear density roll-over. This may be attributed to the very limited heating power, which will be discussed in the following section.
Data have been sorted in terms of different divertor configurations to investigate their effects on the L-H power threshold (I_p = 0.6 MA, B_T = 1.56-1.78 T, n_e = 1.9-3.2 × 10^19 m^-3). In figure 4 P_loss at the L-H transition is plotted versus P_thr,08 with respect to the different divertor configurations, showing that a minimum heating power of ~0.6 MW is needed for H-mode access in the DN configuration, which is lower in comparison with that in the LSN configuration. Note that there are four points below 0.8 MW of P_loss for the LSN configuration, all of which were achieved after fresh wall conditioning (L-modes without transition at a level of P_loss ~ 0.8 MW can easily be found at similar plasma conditions). In addition, the H-mode cannot be accessed in the USN configuration with similar heating power, indicating a lower power threshold for the ion ∇B drift away from the major X-point. This observation has been confirmed in a series of dedicated experiments (B × ∇B ↓) conducted in 2012 [7,14]. This is in contrast to other tokamaks [15][16][17][18][19][20]. In addition, all the type-I ELMy H-modes were obtained in the LSN configuration (B × ∇B ↑) in the 2012 campaign, suggesting that LSN with the ion ∇B drift away from the major X-point appears to facilitate the achievement of high-performance stable H-mode plasmas on EAST. The internal cryopump is located underneath the lower divertor, which may help to reduce the edge neutral density in the LSN configuration. On the other hand, a lower power threshold was observed in the USN configuration with B × ∇B ↓ in the 2012 experiments [7,14,21], where the strike points on the divertor targets were shifted a few centimeters away from the pumping slots. Detailed discussions are given in section 4.2.
Role of edge neutral density on P thr
The importance of edge neutral particles or recycling to the L-H transition power threshold has been recognized for a long time [22][23][24][25][26][27][28][29]. The neutral density is usually regarded as one of the most relevant 'hidden variables' behind the transition. The ion rotation and the radial electric field E_r at the plasma edge are expected to depend strongly on charge-exchange momentum loss if the edge neutral density is high enough. Previous analysis [9] of the EAST H-mode results in 2010 with the C wall suggests that the neutral density near the lower X-point could be a key 'hidden variable' which affects the transition and power threshold. To access the H-mode on EAST, extensive lithium wall coating by evaporation was conducted every day before experimental operation. The neutral density in the edge plasma was estimated based on the divertor Dα emission measured by a photodiode array [30]. It was found that the neutral density near the lower X-point was reduced by a factor of four with heavy lithium wall coating, while the required minimum heating power to access the H-mode was gradually reduced from 0.6 MW to ~0.5 MW. Note that before the application of lithium coating no H-mode could be achieved.
H-mode access
In the 2012 campaign, EAST's capabilities were significantly enhanced [2]. Graphite tiles in the low heat load area of the main chamber were replaced by molybdenum, with the divertor remaining unchanged (Mo/C wall). In addition, the LHCD and ICRH systems were upgraded to 4 MW and 6 MW, respectively. The lithium evaporation system was upgraded to improve the coating uniformity, leading to a significant reduction in hydrogen concentration from ~10% down to ~3%, which allows for more effective ICRF heating with the minority heating scheme. With these enhanced capabilities, H-modes with small ELMs lasting over 30 s have been achieved [2,31]. Note that before the application of lithium wall coating, no access to the H-mode could be achieved. In this campaign about 1371 H-mode discharges were obtained across a wide range of operation parameters, B_T = 1.33-2.07 T. H-modes were produced with different divertor configurations, including DN or unbalanced DN (75% of H-mode shots) as well as LSN (21% of the shots) and USN (4% of the shots). The L-H transitions in 16% of the H-mode shots occurred during the configuration transient process from DN to LSN or USN. There were 4% of L-H transitions that occurred during the plasma current ramping phase, which are not included in the following analysis. The oscillation amplitude (RMS/MEAN) for the single-step L-H transition is rather small, typically only ~3% in the target Dα signals and the divertor probe ion saturation signals, which is much smaller than that in the dithering L-H transition (~30%) (figure 6). These single-step L-H transitions are very similar to the L-H transitions which appeared in 2010, in which the oscillation amplitude sometimes slowly increased when approaching the transition [13]. Significant magnetic perturbations, |δB_P| ~ 1 G, have been detected by the Mirnov coils, associated with the 'dithering' or 'I-phase' in the dithering L-H transition, which is normally obtained in the double null configuration [10]. The magnetic perturbations are much weaker when associated with the small-amplitude LCOs in the single-step L-H transition, which is normally obtained in the lower single null configuration. However, there is no significant difference in the L-H power threshold between these two types of transitions, indicating that the effect of rotating MHD on the L-H transition is not significant. On the other hand, recent experiments in HL-2A show that a kink-type MHD mode routinely occurs and crashes rapidly just prior to the I-H transition, which finally triggers the transition [32]. More experiments are required to investigate the effect of the rotating MHD on the L-H threshold power.
In figure 7(a) P_loss is plotted versus P_thr,08 for dithering L-H transitions and single-step L-H transitions, showing that the lower boundary follows the scaling, with P_loss in a range of 0.6-1.7 times P_thr,08. As shown in the figure, the loss powers P_loss show no significant difference between the dithering L-H transitions and the single-step L-H transitions. However, the occurrence of these two types of L-H transitions strongly depends on the divertor configuration. Detailed experiments on the effects of divertor configuration on H-mode access are discussed in the following section. In figure 7(b) the normalized powers P_loss/P_thr,08 are plotted as a function of n_e. All the H-modes were operated in a low density range, n_e = 1.4-3.6 × 10^19 m^-3. The low density dependence of the power threshold, namely an increase below a minimum density n_e,min, was identified for the first time on EAST with the Mo/C wall. The minimum density n_e,min is at about 2 × 10^19 m^-3, close to the minimum density limit for H-mode access in the 2010 campaign. This may be a reason why the density roll-over was not observed before. The physics mechanisms underlying the low density dependence of the power threshold remain largely unknown. Recent results from JET with Be/W and C walls show a correlation of the density roll-over with the divertor geometry and divertor/wall materials [8]. On the other hand, experiments from ASDEX Upgrade demonstrate the key role played by the edge ion heat flux as an explanation for the strong increase in the L-H power threshold at low density with electron cyclotron resonant heating (ECRH) as the auxiliary heating power [33]. These findings show that the occurrence of the low density dependence of the power threshold is not a universal feature across tokamaks. More dedicated experiments with different heating schemes are required, which will be available on EAST in 2015.
Influences of divertor configuration on the L-H power threshold and transition behaviour
EAST has a flexible poloidal magnetic field control system to accommodate different divertor configurations. The plasma configuration can change smoothly from one to another during a single discharge. H-mode plasmas were obtained with different divertor configurations, including LSN, USN and DN, and different B_T directions. Figure 8 shows P_loss at the L-H transition as a function of dR_sep. Here, dR_sep is the distance between the primary and secondary separatrices at the outer midplane. Therefore, a value of dR_sep = 0 indicates a perfectly balanced DN configuration, |dR_sep| < 0.01 indicates an unbalanced DN configuration, and values of dR_sep < −0.01 and dR_sep > 0.01 indicate LSN and USN configurations, respectively. As shown in figure 8(a), a minimum threshold value of P_loss ~ 0.6 MW is found in the DN configuration while a lower power threshold is required in the LSN compared to the USN configuration (B × ∇B ↑). For B × ∇B ↓, the lowest threshold, ~1.2 MW, is also found in the DN configuration; both dithering L-H transitions and single-step L-H transitions are obtained in USN and DN, but cannot be accessed in the LSN configuration, which suggests that the power threshold is higher than the available power level with the ion ∇B drift towards the major X-point. To exclude the low density dependence of the L-H power threshold mentioned before, the density range above 2.1 × 10^19 m^-3 is selected from the data set in figure 8(a) and the normalized power threshold versus dR_sep is shown in figure 8(b). It shows that there is a similar power threshold dependence for the ion ∇B drift direction towards the upper divertor, in which a minimum threshold is found in the DN configuration and a lower threshold is required in the LSN compared to the USN configuration. However, the dependence is not clear for the opposite drift direction, where DN and USN share a similar power threshold. For DN, the power threshold is lower for B × ∇B ↑ when compared with that for B × ∇B ↓. Dedicated experiments are required to test the situation for B × ∇B ↓, which will be shown in the following paragraph. In addition, most dithering L-H transitions were obtained in the DN configuration for both B_T directions, while almost all single-step L-H transitions were obtained in the LSN configuration for B × ∇B ↑ and in the USN configuration for B × ∇B ↓, i.e. with the ion ∇B drift normally pointing away from the major X-point on EAST. However, no significant difference in the required heating power was found between dithering L-H transitions and single-step L-H transitions. This is further proved in a dedicated experiment with two adjacent shots running in the DN and LSN configurations with similar initial target plasmas, as illustrated in figure 9 (DN) and figure 10 (LSN). The dithering L-H transition and the single-step L-H transition, driven by ~0.5 MW LHCD and ~0.3 MW ICRH heating at a line-averaged electron density of ~2.1 × 10^19 m^-3, are obtained, respectively. To further investigate this, an experiment has been conducted in a single shot with varying divertor configurations. As shown in figure 11, two L-H transitions were obtained in this shot under similar conditions while the configuration varied from DN to LSN (note that the density is even higher at the second L-H transition with the LSN configuration). The dithering L-H transition occurred in the DN configuration and the single-step transition appeared in the LSN configuration, which demonstrates the configuration dependence of the dithering cycles preceding the L-H transition.
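To make the dR_sep convention above concrete, the short Python sketch below classifies a configuration from dR_sep using the boundaries quoted in the text. The assumption that dR_sep is expressed in metres (so the 0.01 boundary corresponds to 1 cm) and the function name are mine, for illustration only.

```python
def classify_configuration(dr_sep_m, tol=0.01):
    """Classify the divertor configuration from dR_sep, the outer-midplane distance
    between primary and secondary separatrices. Units assumed to be metres."""
    if dr_sep_m == 0.0:
        return "balanced DN"
    if abs(dr_sep_m) < tol:
        return "unbalanced DN"
    return "LSN" if dr_sep_m < 0 else "USN"

# Example values similar to the shots discussed in the text
for dr in (0.0, -0.004, -0.015, 0.015):
    print(f"dR_sep = {dr:+.3f} m -> {classify_configuration(dr)}")
```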
Not only does the occurrence of the two types of L-H transitions strongly depend on the divertor configuration, but the behaviour of the dithering cycles at the dithering L-H transitions also differs significantly from one configuration to another. Figure 12 shows the target Dα signals of three typical shots for different divertor configurations, measured by an 18-channel vertical chord of Dα photodiode arrays (PDA) as shown in figure 1. The dithering cycles in the DN configuration are mostly regular large-amplitude oscillations, typically exhibiting a clear and sharp transition from the L-mode to the I-phase, as shown in figure 12(b). The duration of the dithering cycles with the USN configuration (normally B × ∇B ↑) is usually very short, accompanied by only a few limit cycles (figure 12(c)). The dithering cycles with the LSN configuration (normally B × ∇B ↑) are more irregular in frequency and amplitude, and usually appear with increasing amplitude until the final transition to the ELM-free H-mode (figure 12(a)). The features described above are highly reproducible for the different divertor configurations. The magnetic configurations and main plasma parameters just prior to the L-H transitions of these three shots are shown in figure 13, with the plasma elongation factor κ, the upper and lower triangularity, δ_u and δ_l, and the edge safety factor q_95 listed. The power across the separatrix P_loss is also listed, indicating similar power injection for the three shots.
Although the appearance of the two types of L-H transitions shows no significant dependence on heating power, the duration of the dithering phase of the dithering L-H transition is directly related to it [34]. In EAST, dithering L-H transitions dominantly heated by radio-frequency power, i.e. LHCD and/or ICRH, were selected to further investigate this issue (B_T/I_p ~ 1.8 T/0.4 MA, DN). We take the normalized excess power divided by the confinement time τ_E as an estimate of the ramp rate γ_P of the power in excess of the L-H transition power threshold at the separatrix, expressed as γ_P = (P_loss − P_thr,08)/(P_loss τ_E). Figure 14 plots the duration of the dithering phase occurring at the L-H transition versus the ramp rate γ_P. It shows that the duration of the dithering phase decreases with increasing γ_P. To illustrate the effect, figure 15 shows three adjacent shots with similar plasma density (figure 15(c)) and other parameters prior to the L-H transitions. The Dα signals have been shifted on the vertical axis to avoid overlap in the plot (figure 15(a)). A progressive increase of the auxiliary RF heating power was applied between these shots (figure 15(b)), resulting in an increase in γ_P. Accordingly, a decrease in the duration of the dithering phase was found (figure 15(a)). Meanwhile, the L-H transitions which occurred earlier correspond to higher power injection.
Dedicated experiments have been performed to further investigate the effects of divertor configuration on the L-H power threshold on EAST. Figure 16 shows three adjacent discharges with a similar initial target plasma density and input power from LHCD and ICRH under different divertor configurations for the ion ∇B drift towards the lower divertor. H-modes are achieved in the DN configuration (42023) with dR_sep ~ 0 and the USN configuration (42024) with dR_sep ~ 1.5 cm, as evidenced by the appearance of ELMs seen in the divertor Dα emission and the increase in the line-averaged electron density. Note that the coupled power from LHCD is strongly reduced in DN after the L-H transition, while the H-mode can still be maintained for a long period at a higher density compared with the USN and LSN configurations. In contrast, the plasma remains in L-mode (shot 42022) for the LSN case with dR_sep ~ −1.5 cm, suggesting that the power required for the L-H transition is lower with the ion ∇B drift away from the major X-point on EAST, which is in contrast to other tokamaks [15][16][17][18][19][20]. As mentioned above, EAST has only one in-vessel cryopump, located underneath the lower divertor, providing the main pumping. However, the LSN configuration appears to have a higher power threshold. This indicates that the pumping capability is not sufficient to significantly affect the L-H transition power threshold, since the strike points on the divertor targets were shifted a few centimeters away from the pumping slots, as mentioned before.
Effects of lithium wall conditioning on P thr
To facilitate density control and reduce edge recycling for long-pulse operations, extensive efforts have been made in developing wall conditioning techniques [2,3,21]. In particular, we have explored various lithium coating techniques to enhance the uniformity of lithium coverage on the wall. In EAST, lithium wall conditioning has become routine and is the most effective wall conditioning technique to reduce neutral recycling. Compared with the 2010 campaign, similar techniques of lithium coating application, i.e. lithium evaporation and lithium powder injection, were used in the 2012 campaign [3]. During the 2012 campaign, 15-45 g of lithium was evaporated every day using three ovens. In addition, fine lithium powder was dropped in 47 H-mode shots. After shot 39637 (with ~0.765 kg of lithium accumulated) the EAST vacuum chamber had to be opened because of the breakdown of the fast control coil. The lithium coverage was then cleaned up. After operation resumed, strong lithium wall coating was conducted, with 1.613 kg of new lithium accumulated in total. To investigate the effects of lithium wall conditioning on the L-H transition threshold, H-modes with dithering L-H transitions under similar conditions (I_p = 0.28-0.6 MA, B_T = 1.33-2.0 T, DN) are studied. We use the Dα emission near the lower X-point measured by a photodiode array divided by the central line-averaged electron density (Dα_lower_X-point/n_e) as an indicator of the neutral density in the divertor region, as indicated in figure 1 (Dα (1-35a), red lines). The neutral density does not show an obvious downward trend when the lithium accumulation is less than 0.8 kg. However, the neutral density appears to decrease gradually after about 0.8 kg of lithium accumulation, as shown in figure 17(a). In addition, it is found that the power across the separatrix P_loss increases with increasing neutral density while the power threshold P_thr,08 is nearly unchanged for similar initial plasma conditions (n_e, B_T, S), as indicated in figure 17(b). A reduced transition power threshold was observed with reduced edge neutral density, indicating that the edge neutral density near the X-point could be a key 'hidden variable' behind the H-mode access, which is consistent with previous experimental findings in EAST with the C wall [9]. Therefore, the EAST experiments indicate that the low recycling regime with reduced edge neutral density enabled by lithium wall coating facilitates H-mode access, both for the C wall and the Mo/C wall.
Role of field-dependent SOL parallel flow on the transition threshold
The ion ∇B drift direction affects the plasma flow along field lines in the SOL. Measurements of the SOL parallel plasma flow (TCV [35], JET [36]) clearly indicate that there is a tendency for the plasma to flow along field lines in the co-current direction in the SOL on the low field side near the midplane. These flows tend to reverse when the magnetic field is reversed. Similar results for the SOL flow reversal have been observed in the JT-60U tokamak by Mach probe measurements for the two ion ∇B drift directions, suggesting that the ion Pfirsch-Schlüter (PS) flow is the most feasible mechanism to drive the parallel plasma flow at the midplane against the ion ∇B drift direction [37]. On EAST, the characteristics of the measured SOL parallel plasma flow under various discharge conditions on the low field side midplane are consistent with the PS flow, with a measured parallel Mach number M_|| = 0.3-0.5, as shown in figures 12 and 14 in [38,39]. These findings suggest that the SOL parallel flow on the low field side midplane is dominated by the PS flow and that the direction of the PS flow component of the SOL parallel flow depends on the ion ∇B drift direction.
To illustrate the configuration dependence of the transition threshold, a schematic diagram of the SOL parallel plasma flow in the LSN and USN configurations for the two toroidal magnetic field B_T directions is shown in figure 18. The cases of counter-clockwise (B × ∇B ↑) and clockwise (B × ∇B ↓) directions of the field B_T, with the direction of the plasma current I_p remaining unchanged (counter-clockwise), are shown in figures 18(a)-(d), respectively. As mentioned above, the L-H power threshold P_thr is lower for the ion ∇B drift direction away from the major X-point than for the opposite ion ∇B drift direction, i.e. a lower P_thr was found in figure 18(a) than in figure 18(b), and a lower P_thr was observed in figure 18(d) than in figure 18(c), accordingly. A simple picture of the SOL flow is illustrated: driven by the ballooning mode instability, a large part of the particles can cross the separatrix on the low field side due to the unfavorable magnetic curvature there (indicated by red arrows), and these particles then move along the magnetic field line and reach the divertor targets. The path connection lengths L_|| of the parallel flow in the SOL, which is dominated by the PS flow on the low field side midplane, are marked by black arrows for each case, with red circles showing the location of the divertor cryopump. The direction of the PS flow component of the SOL parallel flow depends on the ion ∇B drift direction. The velocity of the SOL parallel plasma flow V_|| at the outer midplane in EAST, which can be evaluated from the parallel Mach number M_|| (V_|| = M_|| C_s, with C_s the ion acoustic speed), is directed downwards for B × ∇B ↑ (figures 18(a) and (b)) and upwards for B × ∇B ↓ (figures 18(c) and (d)). The SOL parallel plasma flow direction at the outer midplane is reversed when the toroidal field changes to the clockwise direction with the same I_p in the counter-clockwise direction. When both B_T and I_p are in the counter-clockwise direction as viewed from the top, the parallel plasma flow in the outer midplane SOL is directed downwards, i.e. towards the lower outer divertor in the LSN configuration and towards the upper inner divertor in the USN configuration, as shown in figures 18(a) and (b). As shown in the figure, when the ion ∇B drift is directed away from the major X-point, the path connection length L_|| of the SOL parallel flow is much shorter than that with the ion ∇B drift directed towards the major X-point. Therefore, the particles crossing the separatrix near the outer midplane are transported to the divertor region on a much shorter timescale τ for the ion ∇B drift directed away from the major X-point, owing to the shorter path connection length L_|| of the SOL parallel flow; this timescale can be evaluated as τ = L_||/(M_|| C_s). Once transported to the divertor region, the particles can effectively be screened and pumped out. This type of field direction-dependent particle transport pattern therefore benefits the particle exhaust. On the other hand, the particle exhaust with a long path connection length of the SOL parallel flow is not as efficient as with a shorter path connection length, since the time-extended cross-field transport in the SOL carries more particles beyond the divertor region, where they can hardly be pumped out. As mentioned before, the EAST results show that L-H transitions can be influenced by edge recycling and particle exhaust, and the particle exhaust may depend on the path connection length of the parallel flow in the SOL.
This may explain why the L-H power threshold is lower with the ion ∇B drift away from the major X-point. For the DN configuration, the path connection length of the SOL parallel flow is always short, both along the direction of the PS flow component of the SOL parallel flow and along the opposite direction on the low field side midplane, irrespective of the ion ∇B drift direction, which is probably consistent with the observation of the minimum transition threshold in the DN configuration. Also, the power threshold is lower for B × ∇B ↑ when compared with that for B × ∇B ↓ in the DN configuration, where the direction of the PS flow component of the SOL parallel flow is towards the lower divertor. A recent numerical study of the transition dynamics [40] shows that there is a strong, nearly linear decrease in the total heat flux across the separatrix as the characteristic parallel advection loss time τ decreases. Generally, a short connection length makes the SOL exhaust more efficient and creates the ion pressure gradient across the SOL necessary for the L-H transition, in agreement with experimental observations [19,41,42]. Thus, it suggests that field direction-dependent SOL parallel plasma flow boundary conditions may play an important role in the sensitivity of the L-H power threshold to configurations on EAST.
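As a rough order-of-magnitude illustration of the advection timescale τ = L_||/(M_|| C_s) discussed above, the sketch below evaluates τ for a deuterium plasma with C_s = sqrt((T_e + T_i)/m_i). The temperatures, Mach number, and connection lengths are hypothetical example values, not EAST measurements.

```python
import math

E_CHARGE = 1.602e-19      # C
M_DEUTERON = 3.344e-27    # kg

def ion_acoustic_speed(te_ev, ti_ev, m_ion=M_DEUTERON):
    """Ion acoustic speed C_s = sqrt((T_e + T_i)/m_i), temperatures in eV."""
    return math.sqrt((te_ev + ti_ev) * E_CHARGE / m_ion)

def advection_time(l_parallel, mach, te_ev, ti_ev):
    """Parallel advection timescale tau = L_par / (M_par * C_s)."""
    return l_parallel / (mach * ion_acoustic_speed(te_ev, ti_ev))

# Hypothetical SOL parameters: T_e = T_i = 50 eV, M_par = 0.4
for l_par in (10.0, 30.0):   # short vs long connection length, metres
    tau = advection_time(l_par, mach=0.4, te_ev=50.0, ti_ev=50.0)
    print(f"L_par = {l_par:5.1f} m -> tau ~ {tau * 1e3:.2f} ms")
```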
Summary and discussion
The L-H power threshold and transition behaviour have been studied with respect to divertor configuration and lithium wall coating on EAST for two sets of walls: the C wall and the Mo/C wall. Two types of L-H transitions, the single-step L-H transition and the dithering L-H transition, have been identified. The dynamics of these L-H transitions have recently been discussed in [10], exhibiting similar features of turbulence-flow interactions at the plasma edge, as evidenced by dedicated probe measurements. In the 2010 campaign with the C wall, the L-H transitions were mostly single-step L-H transitions with a heating power very marginal to the transition threshold predicted by the international tokamak scaling. Some single-step L-H transitions are preceded by small-amplitude LCOs with a frequency around 2 kHz (figure 2). However, both types of L-H transitions were observed in the 2012 campaign with the Mo/C wall, and they could be accessed with a similar input power. On the other hand, a decrease in the duration of the dithering phase is found within the dithering L-H transition with increasing input power. These results suggest that the transition behaviour might differ even if the input power is marginal to the transition threshold. Recent numerical approaches to the transition dynamics ([43], and references therein), which demonstrated that LCOs appear when the input power is close to the transition threshold, should take more of these variations into consideration. This may not be an effect of the wall change, since the wall material in the divertor was unchanged and heavy lithium wall coating was conducted in both campaigns. Another difference is that the strike points on the divertor targets were shifted a few centimeters away from the pumping slots during the whole campaign with the Mo/C wall. This shift significantly decreases the divertor pumping capability compared with the experiments with the C wall.
Results from the experiments with the Mo/C wall demonstrated that the transition behaviour depends on the divertor configuration and the ion ∇B drift. For the ion ∇B drift towards the upper divertor, dithering L-H transitions were normally obtained in the DN configuration and almost all single-step L-H transitions were obtained in the LSN configuration (figure 8). Dedicated experiments with two adjacent discharges with similar target plasma conditions also show that the dithering L-H transition occurred in the DN configuration and the single-step L-H transition was accessed in the LSN configuration (figures 9 and 10). This is further demonstrated by a dedicated experiment varying the configuration from DN to LSN within a single shot, in which the dithering L-H transition occurred in DN and the single-step L-H transition appeared in the LSN configuration (B × ∇B ↑) (figure 11). For the ion ∇B drift towards the lower divertor, dithering L-H transitions were also frequently observed in DN, and single-step L-H transitions were normally obtained in the USN configuration, but not in the LSN configuration. In short, the dithering L-H transitions are usually accessed in the DN configuration with reduced divertor pumping capability, regardless of the field direction. The single-step L-H transitions are normally obtained in a single null configuration with the ion ∇B drift away from the major X-point, no matter whether the divertor pumping is weak or strong. Thus, weak divertor pumping appears to be necessary for accessing the dithering L-H transition, whereas it does not appear to play a dominant role in setting the L-H transition behaviour. Similar divertor configuration effects are also observed on the Alcator C-Mod tokamak, where the dithering L-H transition is associated with a slot-divertor configuration and the single-step L-H transition is often found with a vertical-plate divertor configuration [44]. All these findings may suggest that there are some missing key ingredients, e.g. divertor configuration, ion ∇B drift direction, etc, in determining the transition behaviour. On the other hand, all the type-I ELMy H-modes were obtained in the LSN configuration (B × ∇B ↑) [14]. LSN with B × ∇B ↑ appears to facilitate the achievement of high-performance stable H-modes in EAST, as the cryopump is located underneath the lower outer target.
The divertor configuration dependence of the transition threshold has been studied. For the C wall in the 2010 campaign, the H-mode could be accessed in the DN and LSN configurations for the ion ∇B drift towards the upper divertor, but was not accessible in the USN configuration with a marginal level of injection power, predominantly heated by LHCD. A minimum power threshold P_thr of ~0.6 MW had been observed in the DN configuration (figure 4). For the Mo/C wall in the 2012 campaign, L-H transitions were accessed in different divertor configurations, i.e. the LSN, DN and USN configurations, and for both B_T directions with enhanced heating power. For the ion ∇B drift towards the upper divertor, a minimum power threshold P_thr of ~0.6 MW is found in the DN configuration and a lower power threshold is required in the LSN compared to the USN configuration. For the ion ∇B drift towards the lower divertor, the lowest threshold, ~1.1 MW, is also found in the DN configuration. L-H transitions were obtained in USN and DN, but could not be accessed in the LSN configuration (figure 8(a)). For DN, the power threshold is lower for B × ∇B ↑ when compared with that for B × ∇B ↓. When the analysis is converted from P_loss to P_loss/P_thr,08, a similar power threshold dependence can be seen for B × ∇B ↑, while the trend is not clear for the other field direction (figure 8(b)). For B × ∇B ↓, dedicated L-H transition experiments varying the configuration at similar initial target plasmas show that the L-H transition can be accessed in the DN and USN configurations, but is not accessible in the LSN configuration with the same heating power (figure 16). As mentioned above, EAST has only one in-vessel cryopump, located underneath the lower outer divertor target, for particle exhaust; however, the strike points on the divertor targets were shifted a few centimeters away from the pumping slots for the Mo/C wall. This results in a weak pumping capability which is not sufficient to significantly affect the L-H transition power threshold. All of these findings demonstrate that a minimum P_thr occurs in the DN configuration, and that a lower power threshold is required with the ion ∇B drift direction away from the major X-point, both for the C wall and the Mo/C wall on EAST, which is in contrast to other tokamaks [15][16][17][18][19][20]. Note that, in this study, the definition of the divertor configuration in terms of small dR_sep is somewhat arbitrary, and the presence of a second X-point inside the vacuum vessel and close to the plasma might affect the edge plasma, wall interaction and recycling, which may result in the distinctive findings on EAST. On the other hand, similar results from ASDEX Upgrade and MAST [19] and studies on NSTX [20] showed a minimum P_thr also near the balanced DN configuration. These findings show that the occurrence of a minimum in P_thr is a robust feature on EAST, regardless of wall materials, pointing to a potential role played by SOL and divertor physics in the L-H transition. A schematic diagram of the SOL parallel plasma flow has been discussed, which suggests that field direction-dependent SOL parallel plasma flow boundary conditions may play an important role in the sensitivity of the L-H power threshold to configurations.
Strong effects of heavy lithium wall coating on the H-mode access have been observed both in experiments with the C wall and with the Mo/C wall. Similar results were obtained with the Mo/C wall: the neutral density near the lower X-point was progressively reduced by a factor of 2 with increasing lithium accumulation, and the power across the separatrix P_loss decreases with increasing lithium accumulation (figure 17), consistent with previous studies with the C wall [9]. The low edge recycling conditions achieved by lithium wall coating are thought to be the key factor behind the low L-H power threshold, as indicated by a significant drop in the neutral density near the lower X-point. All these findings suggest that low recycling conditions achieved by lithium wall coating facilitate H-mode access on EAST, and may be relevant for ITER as well. The low density dependence of the L-H power threshold P_thr, namely an increase of P_thr below a minimum density n_e,min, is identified with the Mo/C wall for the first time, but not with the C wall. The minimum density n_e,min is close to the minimum density limit for H-mode access with marginal input power in the C wall. This may explain why the low density dependence was not seen with the C wall. Future work is required to test the low density dependence of the power threshold with different heating schemes, such as neutral beam injection heating. In this study, the difference in P_loss between the different configurations has been evaluated only in terms of the source heating power, and the possibility of differences in the absorbed power related to the divertor configuration and the ion ∇B drift direction needs more investigation. Future work is required to accumulate dedicated experiments and diagnostics, which will be available in the next campaign in 2015.
"Physics"
] |
Fractional Interaction of Financial Agents in a Stock Market Network
In this study, we present a model which represents the interaction of financial companies in their network. Since long time series have a global memory effect, we formulate our model in terms of fractional integro-differential equations. This model characterizes the behavior of the complex network whose vertices are the financial companies operating in XU100 and whose edges are formed by a distance based on the Pearson correlation coefficient. This behavior can be seen as the financial interaction of the agents. Hence, we first cluster the complex network in terms of high modularity of the edges. Then, we give a system of fractional integro-differential equations with two parameters. The first parameter defines the strength of the connection of agents to their cluster; to estimate this parameter we use the vibrational potential of each agent in its cluster. The second parameter in our model defines how much agents in a cluster affect each other; therefore, we use the disparity measure of the PMFG of each cluster to estimate it. To solve the model numerically we use an efficient algorithmic decomposition method and conclude that the solutions are consistent with real-world data. The model and the solutions we present with the fractional derivative show that the real data of the Borsa Istanbul Stock Exchange Market always seek an equilibrium state.
Introduction
Complex systems are mathematical structures involving interacting agents at different levels. These interactions emerge among financial, chemical, social, and computer system entities. In the realm of computational finance, a financial market can be viewed as an interacting group of boundedly-rational agents, and its fluctuations exhibit strong nonlinearity and persistent memory. Mathematical tools such as network and graph theory can be used to understand and analyze these systems [1,2]. There are several models expressed in terms of differential equations in biological complex systems. For instance, virus models that classify individuals and hosts can be used to analyze the spread of a contagion [3][4][5]. Besides, bursting electrical activity in the pancreatic β-cell, population models, unilingual-bilingual interactions [6], and the interaction of biological species living together [7] can also be modelled by differential equations. However, these models are not restricted to biological systems. Recent studies show that complex systems involving financial agents have structures similar to those of systems involving biological agents [8,9]. Therefore, it is reasonable to model the interaction of financial agents as we model the interaction of biological agents.
In a financial market, heterogeneous agents interact through simple investment strategies driven by the investors. In a perfectly rational market, information is transmitted continuously and agents adapt their behavior accordingly. Besides, asset prices reflect economic fundamentals. Agents are assumed to interact only through the price system. Hence, a complex network where agents are expressed as vertices and edges are formed by the correlation of price fluctuations emerges as a powerful mathematical tool to model such a financial system in the traditional way. In contrast to the Keynesian approach, this traditional view assumes that asset prices are driven only by market fundamentals, and the role of market psychology is neglected. Even though we use the traditional way to express our model in this study, we need to point out two important classes of investors in the traditional view of the interaction of agents, called chartists and fundamentalists [10]. Chartists tend to look for simple patterns, such as trends in past prices, and base their investments upon those patterns. Conversely, fundamentalists make their decisions upon the expectation that asset prices move towards their fundamental values. Fundamentalist investors buy or sell assets that are under- or overvalued. The market tends to be dominated by either fundamentalists or chartists. However, since the behavior of the agents is persistent, the majority of agents switches to the other view at a certain point [11,12].
Our approach in this study aims to model the interaction of agents in a stock market network in the traditional way. We first use a threshold method to construct a network model where the vertices are the companies operating in a stock market and the edges are formed by the correlation distance of the daily logarithmic returns of stock prices. The dimensionality of the resulting network model is high and the patterns that yield a power-law degree distribution may disappear; however, the network contains optimally many edges to characterize community structures. By maximizing the modularity of the edges in the network, we can cluster the agents into densely connected vertex sets. Then, each cluster has its own subdominant ultrametric structure, that is, a hierarchical structure with at least one leading actor. We set the number of clusters to two, and then assume that investors, whether chartists or fundamentalists, start to invest in one cluster in response to factors such as mergers, capital increases, public flotations, etc. The investors in the other cluster then start to sell assets to obtain the capital to invest in the assets that are increasing in value. Therefore, the price fluctuations spread within each cluster through the leading actors. However, at a certain time, profit realizations start within the cluster with increasing asset prices, and the capital that emerges from the profit realization is used to invest in assets in the cluster with decreasing prices. Eventually, the two clusters find an equilibrium state.
In the complex network model of financial agents, the interactions are modelled by the correlations of long time series [13][14][15][16][17][18][19]. Compared with other types of complex systems, financial systems have strong memory and heredity properties. Therefore, when using differential equations in financial models, it is much more useful to involve fractional calculus. Fractional calculus is the extension of the integer-order differential and integral operators to fractional orders [20,21]. The dynamic memory in a financial process can be defined as an averaged characteristic that describes the dependence of a process on its past. Such memory assumes the awareness of financial agents of the history of the process. Formally, not only does the information on the state of the process {t, χ(t)} affect the behavior of financial agents, but the information about the process states {τ, χ(τ)} for τ ∈ [0, t] also has an effect. This effect is related to the fact that changes in the factors can lead to different amounts of change in the indicators, that is, there exist multivalued dependencies among the variables. One type of such memory of financial agents is called fading memory and has a wide range of applications in the physical sciences [22][23][24][25][26][27][28][29][30]. In this study, we assume that financial agents can remember the previous changes of investments and the impact of these changes on the output by following fading memory, using Caputo's definition of the fractional derivative.
In this study, we present our model with a fractional derivative, in analogy with the model describing biological species living together. This model of biological species was first given in [7] as the following system of ordinary integro-differential equations:
$$ \frac{dN_1}{dt} = N_1(t)\left[k_1 - \gamma_1 N_2(t) - \int_{t-T_0}^{t} f_1(t-\tau)\, N_2(\tau)\, d\tau\right], $$
$$ \frac{dN_2}{dt} = N_2(t)\left[-k_2 + \gamma_2 N_1(t) + \int_{t-T_0}^{t} f_2(t-\tau)\, N_1(\tau)\, d\tau\right]. $$
Several solution methods have been presented to study this model [31][32][33]. The characterization of the fractional order of the model is also studied in [34]. The rest of the paper is organized as follows: In Section 2, we present the preliminaries on fractional calculus and the graph theoretical concepts that we use throughout the paper. We start our analysis by first determining the financial agents in Section 3. The stock market we choose to study is the Borsa Istanbul Stock Exchange Market (XU100). The agents are the companies operating in XU100 and are expressed by time series spanning the working days from 2013 to 2015. Afterwards, we determine the clusters of financial agents that have strong correlations by using the high modularity method. In Section 4, we introduce the Adomian decomposition method for solving the system numerically. In Section 5, we present the results obtained by solving the model with the Adomian decomposition method. Finally, in Section 6, we discuss the computational results in depth.
Preliminaries
Fractional calculus is an efficient mathematical tool to express complex system phenomena which involve memory effects. Hence, we use fractional derivatives and integrals to study the model we present in this study. In this section, we give some basics of fractional calculus in the Caputo sense. We also introduce some basics of graph theory, which is the fundamental tool for network modelling. Throughout the paper we let Γ denote the Gamma function, which is an extension of the factorial function.
Fractional Calculus
The generalization of integer-order differentiation and integration to fractional order is called fractional calculus [20,21]. The basic definitions and properties of fractional calculus theory are given as follows.

Definition 1. [20] For $f(x) \in C(a,b)$ and $n-1 < \alpha \le n$, the Caputo fractional derivative operator of order α is given as
$$ {}^{C}_{a}D^{\alpha}_{x} f(x) = \frac{1}{\Gamma(n-\alpha)} \int_{a}^{x} (x-s)^{n-\alpha-1} f^{(n)}(s)\, ds. $$
Throughout this paper, we denote the Caputo fractional derivative operator as ${}^{C}_{a}D^{\alpha}_{t} = D^{\alpha}_{a}$. We also let a = 0, since our formulation only involves initial conditions at t = 0.
Definition 2.
[20] The Riemann-Liouville fractional integral operator of order α ≥ 0 of a function f is defined as
$$ J^{\alpha} f(x) = \frac{1}{\Gamma(\alpha)} \int_{a}^{x} (x-s)^{\alpha-1} f(s)\, ds, \qquad J^{0} f(x) = f(x). $$
Several properties of the Riemann-Liouville fractional integral operator can be found in [35][36][37]. Since the Caputo fractional derivative allows traditional initial and boundary conditions to be included in the formulation of the problem [38], we present our model in the sense of the Caputo fractional derivative. With the introduction of $J^{\alpha}$, the operator $D^{\alpha}_{a}$ can also be expressed as
$$ D^{\alpha}_{a} f(x) = J^{n-\alpha}\!\left( \frac{d^{n}}{dx^{n}} f(x) \right). $$
Also, the following two basic properties of the entwined relations between the Caputo and Riemann-Liouville fractional operators are needed to present the solution of the fractional differential equations:
$$ D^{\alpha}_{a} J^{\alpha} f(x) = f(x) $$
and
$$ J^{\alpha} D^{\alpha}_{a} f(x) = f(x) - \sum_{k=0}^{n-1} f^{(k)}(a)\, \frac{(x-a)^{k}}{k!}. $$
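As a quick numerical illustration of the Riemann-Liouville integral defined above, the sketch below evaluates J^α f on a uniform grid with a simple product-rectangle rule and checks it against the known closed form J^α t = t^(α+1)/Γ(α+2). The discretization scheme is a minimal choice for illustration, not the method used in the paper.

```python
import math
import numpy as np

def rl_integral(f_vals, h, alpha):
    """Riemann-Liouville fractional integral J^alpha f on a uniform grid,
    using a product left-rectangle rule (first-order accurate)."""
    n = len(f_vals)
    out = np.zeros(n)
    c = h**alpha / math.gamma(alpha + 1.0)
    for i in range(1, n):
        k = np.arange(i)                          # nodes t_0 .. t_{i-1}
        w = (i - k)**alpha - (i - k - 1)**alpha   # rectangle-rule weights
        out[i] = c * np.dot(w, f_vals[:i])
    return out

alpha, h = 0.5, 0.001
t = np.arange(0.0, 1.0 + h, h)
approx = rl_integral(t, h, alpha)                      # f(t) = t
exact = t**(1.0 + alpha) / math.gamma(2.0 + alpha)     # known closed form
print(f"max error ~ {np.max(np.abs(approx - exact)):.2e}")
```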
Graph Theory
Real-world problems are often expressed with the relations of interacting individuals. One of the efficient mathematical tools to represent such relations is the simple graph. In stock market networks, the interactions of financial agents can be modelled by simple graphs. Let V be the set of interacting individuals and E be the set of relations; then a simple graph is a tuple G = (V, E). Here we call V the vertex set and E the set of edges. Each element of E is an unordered pair of vertices, $e = \{v_i, v_j\}$ with $v_i, v_j \in V$. The number of edges incident to a vertex v is called the degree of v, and we denote the degree by d_v.
A sequence of edges between the vertices v_i and v_j is called a path, and if there is a path between any two vertices of the graph G, then G is called connected. If there is an edge between all pairs of elements of V, then G is called a complete graph. A k-clique of the graph G is a complete subgraph of G which involves k vertices of G.
For the simple graph G = (V, E) with unordered edges, a binary matrix with the entries
$$ (A_G)_{ij} = \begin{cases} 1, & \{v_i, v_j\} \in E, \\ 0, & \text{otherwise}, \end{cases} $$
is used to represent the relations and is called the adjacency matrix. A_G is symmetric by definition. Now let |V| = n and D_G be the diagonal degree matrix of G defined as $(D_G)_{ii} = d_{v_i}$, so that the graph Laplacian is $L_G = D_G - A_G$. The spectrum of L_G encodes structural properties of G. The one that we use in this study is helpful for constructing a threshold network of the financial agents. L_G is positive semi-definite, with the smallest eigenvalue equal to 0. The multiplicity of the 0 eigenvalue equals the number of connected components of G [39].
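The following Python sketch illustrates the spectral property just mentioned: it builds the Laplacian of a small example graph and counts its connected components from the multiplicity of the (numerically) zero eigenvalue. The adjacency matrix is a toy example, not the XU100 network.

```python
import numpy as np

def laplacian(adj):
    """Graph Laplacian L = D - A for a symmetric 0/1 adjacency matrix."""
    return np.diag(adj.sum(axis=1)) - adj

def n_connected_components(adj, tol=1e-9):
    """Number of connected components = multiplicity of the zero eigenvalue of L."""
    eigvals = np.linalg.eigvalsh(laplacian(adj).astype(float))
    return int(np.sum(eigvals < tol))

# Toy graph with two components: a triangle {0, 1, 2} and an edge {3, 4}
A = np.zeros((5, 5), dtype=int)
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4)]:
    A[i, j] = A[j, i] = 1
print("connected components:", n_connected_components(A))   # -> 2
```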
Several types of subgraphs also carry information about a network expressed as a simple graph G = (V, E). One of them is the tree structure that has minimum weight. Such subgraphs are called Minimum Spanning Trees (MSTs); they involve the junction vertices which are dominant in the flow of information and give rise to a subdominant ultrametric structure [40,41]. In the case where the financial agents are the vertices of the network, the MST gives the hierarchical structure of the financial network [42][43][44]. A planar graph is a simple graph that can be embedded in the plane so that none of its edges intersect. Tree-like planar graphs that involve cliques are also useful to extract information about the network. Such tree-like planar graphs have the same hierarchical structure as the MST but contain a larger amount of information about the relations among the interacting agents [45][46][47]. In [45], the authors present a method to obtain a planar graph with the maximum number of non-crossing edges among the agents of a network and call it the Planar Maximally Filtered Graph (PMFG).
Model
This study involves 93 companies that have been operating in the Borsa Istanbul 100 Stock Exchange Index (XU100) from January 2013 to January 2015. The Pearson correlation coefficient of time series assumes equal lengths of the time series. Hence, even though XU100 has 100 operating companies, we only consider the 93 of them which have equal time lengths. Trading hours for the stocks are held in two sessions on business days with a mid-day break, and one session on some official holidays [44,48]. The tickers of the companies operating in XU100 and considered in this study are given in Table 1. More details on the data can be found in [44]. The data we use are available as sessional closure prices; therefore we calculate the sessional closure price logarithmic return as
$$ r_i(t) = \ln P_i(t) - \ln P_i(t-1), $$
where P_i(t) is the closure price of the i-th stock at session t. To represent the relation between stock pairs, we use the Pearson correlation coefficient of the stocks,
$$ \rho_{ij} = \frac{\langle r_i r_j \rangle - \langle r_i \rangle \langle r_j \rangle}{\sqrt{\left(\langle r_i^2 \rangle - \langle r_i \rangle^2\right)\left(\langle r_j^2 \rangle - \langle r_j \rangle^2\right)}}, $$
where ⟨· · ·⟩ is the temporal average performed over the trading days. The Pearson correlation coefficient varies between -1 and 1: ρ_ij = 1 indicates that the stocks i and j are completely correlated, whilst ρ_ij = −1 indicates that the stocks i and j are completely anti-correlated. Hence, it is also possible to introduce a new distance $d_{\mathrm{Corr}}(i,j) := \sqrt{2(1-\rho_{ij})}/2$ as in [13,44]. We can conclude that if d_Corr(i, j) = 0, then the stocks i and j are completely correlated, and if d_Corr(i, j) = 1, then the stocks i and j are completely anti-correlated.
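The small Python sketch below computes the logarithmic returns, the Pearson correlation matrix, and the correlation-based distance for a handful of synthetic price series. The prices are randomly generated for illustration (not XU100 data), and the distance uses the definition as reconstructed above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic sessional closure prices for a handful of stocks (not XU100 data)
n_stocks, n_sessions = 5, 1000
prices = np.exp(np.cumsum(0.01 * rng.standard_normal((n_stocks, n_sessions)), axis=1))

# Logarithmic returns r_i(t) = ln P_i(t) - ln P_i(t-1)
returns = np.diff(np.log(prices), axis=1)

# Pearson correlation matrix rho_ij and the correlation-based distance
rho = np.corrcoef(returns)
d_corr = np.sqrt(2.0 * (1.0 - rho)) / 2.0   # 0 = fully correlated

print(np.round(d_corr, 3))
```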
This distance based on the Pearson correlation is helpful for edge determination in the network. By using an empirical threshold value among the stocks, it is possible to determine the edges representing strong relations as
$$ \{v_i, v_j\} \in E \iff d_{\mathrm{Corr}}(i,j) \le ThV, $$
where ThV is the threshold value. The threshold value can be determined by subdividing the interval [0, 1], whose boundaries are the extremal values of d_Corr, into h subintervals. Details on the network construction algorithm and its computational complexity can be found in [13,44].
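A minimal sketch of the thresholding step: given a correlation-distance matrix, it keeps an edge whenever the distance does not exceed the threshold, following the convention that small distances mean strong relations. The toy matrix and threshold are illustrative, not the values found for XU100.

```python
import numpy as np
import networkx as nx

def threshold_network(d_corr, thv):
    """Build the threshold graph: an edge {i, j} exists when d_corr[i, j] <= thv."""
    n = d_corr.shape[0]
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if d_corr[i, j] <= thv:
                g.add_edge(i, j, weight=d_corr[i, j])
    return g

# Toy distance matrix (symmetric, zero diagonal) and an illustrative threshold
d = np.array([[0.0, 0.3, 0.8],
              [0.3, 0.0, 0.5],
              [0.8, 0.5, 0.0]])
g = threshold_network(d, thv=0.6)
print(sorted(g.edges()))   # -> [(0, 1), (1, 2)]
```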
The model we propose in this study first deals with two clusters of financial agents of the network. In the literature, a cluster of densely connected vertices of a network is called a graph community [49,50]. This dense connection is internal and can be used to analyze the relations that are represented by edges on the network. There are several methods to detect communities in a network [51][52][53][54]. To find the graph communities in the network, we use the Modularity Maximization Method, which is based on maximizing the Newman modularity index [51] defined as
$$ Q = \sum_{k=1}^{N_C} \left[ \frac{E_k}{m} - \left( \frac{\sum_{j \in C_k} d_j}{2m} \right)^{2} \right], $$
where E_k is the number of edges in the k-th module C_k, N_C is the total number of modules, m is the total number of edges and d_j is the vertex degree. Since the resulting communities are non-overlapping and this method lets us determine the final number of communities, we choose it as an efficient tool. Now let us consider the two communities of financial agents with the total numbers of investments μ_1(t) and μ_2(t), respectively, at time t. Let us assume the investment in the first community is increasing with the coefficient of increase k_1 and that in the second community is decreasing with the coefficient of decrease k_2. Both coefficients k_1 and k_2 are positive reals. If the two communities are left separate, i.e. they are non-overlapping, then the fractional growth of the first can be represented by
$$ D^{\alpha} \mu_1(t) = k_1\, \mu_1(t) \qquad (3) $$
and the decline of the second community can be represented by
$$ D^{\alpha} \mu_2(t) = -k_2\, \mu_2(t). \qquad (4) $$
The neoclassical liberal economy states that markets always look for the equilibrium state. Hence, if we put these two communities together in the corresponding stock market environment, the decrease of the rate of increase of the first community is proportional to μ_2(t), and vice versa. Therefore, it is reasonable to assume the increase and decrease coefficients as
$$ k_1 - \gamma_1\, \mu_2(t) \qquad (5) $$
and
$$ -k_2 + \gamma_2\, \mu_1(t), \qquad (6) $$
where γ_1 and γ_2 are the proportionality constants, which depend on the behavior of the other investors, respectively. The actual decrease and increase of the investments in the communities are due not only to the present state of the other community but also to all previous states over the whole time interval t − T_0 < τ < t, where T_0 is the finite heredity duration of both communities. In addition to the present γ_1 and γ_2 factors, we may have a record of decrease f_1(τ) and of increase f_2(τ). Therefore, by considering the heredity duration of both communities, the total decrease in k_1 over the time interval T_0 is
$$ \int_{t-T_0}^{t} f_1(t-\tau)\, \mu_2(\tau)\, d\tau \qquad (7) $$
and the total fractional increase in k_2 is
$$ \int_{t-T_0}^{t} f_2(t-\tau)\, \mu_1(\tau)\, d\tau. \qquad (8) $$
Now, by considering the effective values of k_1 and k_2 and the equations 3-8, the fractional model with fading memory for the equilibrium state of the two communities of financial agents in the same stock market can be given as the following system of fractional integro-differential equations:
$$ D^{\alpha} \mu_1(t) = \mu_1(t)\left[ k_1 - \gamma_1 \mu_2(t) - \int_{t-T_0}^{t} f_1(t-\tau)\, \mu_2(\tau)\, d\tau \right], \qquad (9) $$
$$ D^{\alpha} \mu_2(t) = \mu_2(t)\left[ -k_2 + \gamma_2 \mu_1(t) + \int_{t-T_0}^{t} f_2(t-\tau)\, \mu_1(\tau)\, d\tau \right], \qquad (10) $$
$$ \mu_1(0) = N_1, \qquad \mu_2(0) = N_2, \qquad (11) $$
where N_1 and N_2 are the initial conditions.
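A minimal sketch of the community-detection step on a toy graph, using networkx's greedy modularity heuristic and the Newman modularity index Q. The paper does not specify which maximization algorithm was used, so this illustrates only the general idea of a two-community, non-overlapping split.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# Toy graph with two obvious communities (not the XU100 network)
g = nx.Graph()
g.add_edges_from([(0, 1), (1, 2), (0, 2),   # community A
                  (3, 4), (4, 5), (3, 5),   # community B
                  (2, 3)])                   # single bridge edge

communities = greedy_modularity_communities(g)   # greedy maximization of Q
q = modularity(g, communities)                    # Newman modularity index

print([sorted(c) for c in communities])           # -> [[0, 1, 2], [3, 4, 5]]
print(f"Q = {q:.3f}")
```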
Method
In this section, we present the graph theoretical methods used to determine the parameters k_1, k_2, γ_1, and γ_2 of the fractional integro-differential equation model in the initial value problem 9-11, and the numerical solution, which is based on the Adomian decomposition method.
Parameter Estimation
The parameters in network models can be estimated by using graph theoretical concepts. For the system 9-11, to determine the coefficients of increase and decrease, we use an interpretation of the displacement of a vertex in a network from its equilibrium state while the network is submerged in a thermal bath. This thermal bath can be seen as the change of investment strategies on the given network. This procedure is called the vibrational potential and was first presented in [55]. Later studies consider the vibrational potential as an efficient measure of vertex centrality [56][57][58]. The main idea in computing the vibrational potential of a network is to embed the vertices into an n-dimensional Euclidean space by using the Moore-Penrose pseudo-inverse of the Laplacian, where n = |V|. Within the hierarchical structure of each community, each stock tends to be adjacent to junction vertices. Therefore, a change of investment on the junction vertices directly affects the corresponding leaves. Therefore, we associate the increase/decrease coefficients with the vibrational potential of the network. However, instead of directly computing the vibrational potential of the whole network, we compute the vibrational potential of each vertex with respect to its neighborhood graph.
For this purpose we express the vibrational potential of a vertex v with respect to its neighboring vertices as
$$ V(\vec{x}_v) = \frac{k}{2}\, \vec{x}_v^{\,T} L_N\, \vec{x}_v, \qquad (12) $$
where k is the spring constant, L_N is the Laplacian of the neighboring graph G_N of the vertex v in G, and $\vec{x}_v$ is the vector whose i-th entry is the displacement of the i-th vertex of G_N. The mean displacement of the vertex v can be computed with the inverse temperature β as
$$ \langle \Delta x_v^2 \rangle = \int x_v^2\, P(\vec{x}_v)\, d\vec{x}_v, \qquad (13) $$
where the probability distribution P(x_v) is
$$ P(\vec{x}_v) = \frac{e^{-\frac{\beta k}{2}\, \vec{x}_v^{\,T} L_N \vec{x}_v}}{\int e^{-\frac{\beta k}{2}\, \vec{x}_v^{\,T} L_N \vec{x}_v}\, d\vec{x}_v}. \qquad (14) $$
Similarly, the displacement correlation of the vertices u and w in the same neighborhood can be defined as
$$ \langle x_u x_w \rangle = \int x_u x_w\, P(\vec{x}_v)\, d\vec{x}_v, \qquad (15) $$
where ⟨· · ·⟩ is the thermal average. Let $0 = \lambda^N_1 < \lambda^N_2 \le \ldots \le \lambda^N_n$ be the spectrum of L_N with eigenvalues $\lambda^N_\mu$. Since the quantity corresponding to the 0 eigenvalue is the center of mass, the 0 eigenvalue does not affect the vertex displacement. Then the integral measure can be transformed by
$$ \vec{x}_v = U_N\, \vec{y}, \qquad (16) $$
where U_N is the matrix formed by the orthogonal eigenvectors of L_N. By the introduction of this transform the new probability distribution can be obtained as
$$ P(\vec{y}) = \frac{e^{-\frac{\beta k}{2}\, \vec{y}^{\,T} \Lambda_N \vec{y}}}{\int e^{-\frac{\beta k}{2}\, \vec{y}^{\,T} \Lambda_N \vec{y}}\, d\vec{y}}, \qquad (17) $$
where the diagonal matrix Λ_N involves the eigenvalues $\lambda^N_\mu$. Since the 0 eigenvalue does not affect the vertex displacement, we can remove the component μ = 1 from Equation 17, and the probability distribution can be computed as
$$ P(\vec{y}) = \prod_{\mu=2}^{n} \sqrt{\frac{\beta k\, \lambda^N_\mu}{2\pi}}\; e^{-\frac{\beta k}{2} \sum_{\mu=2}^{n} \lambda^N_\mu y_\mu^2}. \qquad (18) $$
Hence, by using the probability distribution obtained in Equation 18, it is possible to compute Equation 13 as
$$ \langle \Delta x_v^2 \rangle = \sum_{\mu=2}^{n} \left[(U_N)_{v\mu}\right]^2 \langle y_\mu^2 \rangle, \qquad (19) $$
where
$$ \langle y_\mu^2 \rangle = \frac{1}{\beta k\, \lambda^N_\mu}. \qquad (20) $$
Therefore, the mean displacement of a vertex from its neighborhood can be computed as
$$ \langle \Delta x_v^2 \rangle = \frac{1}{\beta k} \sum_{\mu=2}^{n} \frac{\left[(U_N)_{v\mu}\right]^2}{\lambda^N_\mu}. \qquad (21) $$
By the introduction of the Moore-Penrose pseudo-inverse $L^{N+}_{ij}$ of L_N as in [59,60], it is also possible to compute the mean displacement of a vertex from its neighborhood as
$$ \langle \Delta x_v^2 \rangle = \frac{1}{\beta k}\, L^{N+}_{vv}. \qquad (22) $$
We also note that the displacement correlation of the vertices in the same neighborhood given in Equation 15 can be computed in terms of the Moore-Penrose pseudo-inverse as
$$ \langle x_u x_w \rangle = \frac{1}{\beta k}\, L^{N+}_{uw}. \qquad (23) $$
The other parameters we need to estimate in the system 9-11 are γ_1 and γ_2, which are the proportionality values. The proportionality values control how much the financial agents in the same community affect each other. Hence, they can be measured by how strongly the agents are connected internally. This measurement arises naturally from the PMFG of each community. The PMFG structure allows cliques, which are the topological subgraph structures representing strong relations. Since the PMFG also carries information about the hierarchical structure, it is reasonable to measure the internal connectedness of the communities by using the PMFG. For this measurement we follow the way presented in [45]. The mean disparity measurement ⟨y⟩ of a PMFG can be defined as the mean over the cliques of
$$ y_i = \sum_{j} \left( \frac{w_{ij}}{s_i} \right)^{2}, $$
where i is the generic element of the clique, the sum runs over the other elements j of the clique, w_ij is the weight of the edge {i, j} and $s_i = \sum_{j} w_{ij}$ is the strength of vertex i restricted to the clique.
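A minimal numerical sketch of the pseudo-inverse relation for the mean-square displacement: it computes ⟨Δx_i²⟩ = L⁺_ii/(βk) for the vertices of a small toy graph via the Moore-Penrose pseudo-inverse of its Laplacian. The toy graph and the choice β = k = 1 are assumptions made purely for illustration, not values from the paper.

```python
import numpy as np

def laplacian(adj):
    """Graph Laplacian L = D - A of a symmetric adjacency matrix."""
    return np.diag(adj.sum(axis=1)) - adj

def mean_square_displacement(adj, beta=1.0, k=1.0):
    """<dx_i^2> = L^+_{ii} / (beta * k), using the Moore-Penrose pseudo-inverse."""
    l_plus = np.linalg.pinv(laplacian(adj).astype(float))
    return np.diag(l_plus) / (beta * k)

# Toy 'neighborhood' graph: a star with center 0 and three leaves
A = np.array([[0, 1, 1, 1],
              [1, 0, 0, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 0]])
print(np.round(mean_square_displacement(A), 3))
# The central (junction) vertex shows the smallest displacement, i.e. it is the most constrained.
```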
Adomian Decomposition Method
It is well known that many nonlinear differential equations exhibit strange attractors and that their solutions move toward these attractors. If these strange attractors are examined deeply, it can be seen that they are fractals. Therefore, we aim to deal with fractional nonlinear differential equations rather than their classical forms. Hence we extend the Adomian decomposition method to solve fractional nonlinear equations. For the solution of the system 9-11, we use an efficient decomposition method for approximating the solution of systems of fractional integro-differential equations given in the Caputo sense. The approximate solutions are calculated in terms of a convergent series as in [34]. Now let us consider the system 9-11 with 0 < α ≤ 1. Following the decomposition idea, we may state that
$$ \mu_1(t) = \sum_{m=0}^{\infty} \mu_{1,m}(t), \qquad \mu_2(t) = \sum_{m=0}^{\infty} \mu_{2,m}(t). $$
Applying the operator J^α to both sides of the equations 9-10 and using the initial conditions 11 leads us to the integral equations
$$ \mu_1(t) = N_1 + J^{\alpha}\!\left( \mu_1(t)\left[ k_1 - \gamma_1 \mu_2(t) - \int_{t-T_0}^{t} f_1(t-\tau)\, \mu_2(\tau)\, d\tau \right] \right), $$
$$ \mu_2(t) = N_2 + J^{\alpha}\!\left( \mu_2(t)\left[ -k_2 + \gamma_2 \mu_1(t) + \int_{t-T_0}^{t} f_2(t-\tau)\, \mu_1(\tau)\, d\tau \right] \right). $$
Afterwards, the Adomian process will be as follows: the recursion starts with $\mu_{1,0}(t) = N_1$ and $\mu_{2,0}(t) = N_2$, and each subsequent component $\mu_{1,m+1}$ and $\mu_{2,m+1}$ is obtained by applying J^α to the corresponding right-hand side, with the nonlinear terms replaced by their Adomian polynomials.
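To get a feel for the equilibrium-seeking behaviour of such a fractional two-cluster model, the sketch below integrates a simplified version of the system (the heredity integral is omitted, and all parameter values are hypothetical) with an explicit product-rectangle (fractional Euler) scheme for the Caputo derivative. It is only an illustration, not the Adomian solution used in the paper.

```python
import math
import numpy as np

def frac_euler(f, y0, alpha, h, n_steps):
    """Explicit fractional (Caputo) Euler scheme based on the product-rectangle rule:
    y_n = y_0 + h^a/Gamma(a+1) * sum_j [(n-j)^a - (n-j-1)^a] * f(y_j)."""
    y = np.zeros((n_steps + 1, len(y0)))
    y[0] = y0
    rhs = np.zeros_like(y)
    rhs[0] = f(y[0])
    c = h**alpha / math.gamma(alpha + 1.0)
    for n in range(1, n_steps + 1):
        j = np.arange(n)
        w = (n - j)**alpha - (n - j - 1)**alpha
        y[n] = y0 + c * (w[:, None] * rhs[:n]).sum(axis=0)
        rhs[n] = f(y[n])
    return y

# Simplified two-cluster model without the heredity integral; hypothetical parameters
k1, k2, g1, g2 = 0.02, 0.02, 0.005, 0.005
f = lambda y: np.array([y[0] * (k1 - g1 * y[1]),
                        y[1] * (-k2 + g2 * y[0])])

traj = frac_euler(f, y0=np.array([66.0, 27.0]), alpha=0.5, h=0.1, n_steps=2000)
print("final state ~", np.round(traj[-1], 2))
```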
Results
In order to study the proposed model on the Borsa Istanbul Stock Exchange, we first apply our algorithm to the data set to obtain the stock market network. For the fraction size h = 10000, the algorithm determines the control parameter as 0.6854. The vertices are numbered from 1 to 93 in alphabetical order in Table 1. The resulting network is presented in Figure 1; it can also be concluded that this network has strong internal connectedness. The correlation distance matrix of the agents is presented in Figure 2 as a temperature (heat) map.
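A schematic version of this construction is sketched below: pairwise correlations of the return series are converted to the Mantegna correlation distance d_ij = sqrt(2(1 − ρ_ij)) and an edge is kept whenever the distance does not exceed the control parameter. The synthetic returns and the assumption that the control parameter acts as a simple distance threshold are illustrative only; the authors' procedure for selecting the parameter is not reproduced here.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
returns = rng.normal(size=(250, 93))          # 250 trading days x 93 stocks (synthetic)
rho = np.corrcoef(returns, rowvar=False)      # Pearson correlations between stocks
dist = np.sqrt(2.0 * (1.0 - rho))             # Mantegna correlation distance

threshold = 0.6854                            # control parameter quoted in the text
G = nx.Graph()
G.add_nodes_from(range(93))
for i in range(93):
    for j in range(i + 1, 93):
        if dist[i, j] <= threshold:           # keep only sufficiently close agents
            G.add_edge(i, j, weight=dist[i, j])
print(G.number_of_nodes(), G.number_of_edges())
```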
To determine the two non-overlapping clusters of financial actors we use modularity maximization. The resulting communities are presented in Figure 3. The number of agents in each community gives the initial conditions as N_1 = 66 and N_2 = 27.
Fig. 2 The matrix of correlation distances among the financial agents of XU100. The darker points are closer to 1 while the lighter ones are closer to 0.
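The community step can be sketched as follows, with networkx's greedy modularity heuristic standing in for the maximisation procedure and the karate-club graph standing in for the Borsa Istanbul network; in the paper the two largest communities contain 66 and 27 agents.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.karate_club_graph()                       # placeholder for the stock network
communities = sorted(greedy_modularity_communities(G), key=len, reverse=True)
c1, c2 = communities[0], communities[1]          # two largest non-overlapping clusters
N1, N2 = len(c1), len(c2)                        # initial conditions of the model
print(N1, N2)
```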
As mentioned above, the parameters of the model described by the system 9-11 are obtained from the vibrational potentials with respect to the neighbourhood graphs and from the mean disparity measures of each community. We note that the vibrational potentials of the vertices tend to form internal clusters, i.e., some have higher values and some lower; therefore, in determining the k_1 and k_2 values we take the mean of the vibrational potentials in each community. After forming the PMFG of each community, it becomes possible to obtain disparity measures with respect to 4-cliques, which represent the strongest connections. The resulting parameters are γ_1 = 0.3342 and γ_2 = 0.3388. These values are close to 1/3, which also indicates that the clustering method we chose is reasonable [45]. To interpret the results, we present the MSTs and PMFGs of the two clusters in Figures 4-5. In the light of these computed parameters we may now state the system 9-11 with T_0 = 100 and f_1(t) = f_2(t) = e^t. By applying the Adomian process of Section 4.2, the solution of the initial value problem 37-39 is obtained with a three-term approximation. The plots of the solution functions (40) and (41) are presented in Figures 6-15 for different α values.
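The disparity-based estimate of the proportionality values can be sketched as below. The routine averages y(i) = Σ_j (w_ij / s_i)^2 over the 4-cliques of a weighted graph; restricting the strength s_i to the clique members, the toy graph and its weights are assumptions made for illustration, and construction of the PMFG itself is taken as given.

```python
import networkx as nx

def mean_disparity_4cliques(pmfg):
    """Mean disparity <y> over all 4-cliques of a weighted graph."""
    values = []
    for clique in nx.enumerate_all_cliques(pmfg):
        if len(clique) != 4:
            continue
        for i in clique:
            others = [j for j in clique if j != i]
            s_i = sum(pmfg[i][j].get("weight", 1.0) for j in others)
            y_i = sum((pmfg[i][j].get("weight", 1.0) / s_i) ** 2 for j in others)
            values.append(y_i)
    return sum(values) / len(values) if values else float("nan")

# Toy weighted graph standing in for a community PMFG.
pmfg = nx.complete_graph(5)
for u, v in pmfg.edges:
    pmfg[u][v]["weight"] = 1.0 + 0.1 * (u + v)
print(mean_disparity_4cliques(pmfg))
```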
Conclusions
Ordinary differential equations are the most common mathematical tool for representing real-world problems, but they become less effective whenever the problem involves a memory effect. Complex systems representing financial agents exhibit such a memory effect; hence it is reasonable to model them using the idea of the fractional derivative.
In this study, we propose a model that represents the fractional interaction of financial agents. The interaction of the agents is determined within a complex network of a stock market. We express the model as a system of fractional integro-differential equations in the Caputo sense, so the fading memory of the financial interaction is retained. Our model considers two clusters of agents, where one cluster tends to attract investment flow. To determine the clusters we use maximization of the edge modularity in the stock market network. The resulting clusters are consistent with the structure of Borsa Istanbul: both the MST and PMFG filtrations of the clusters have agents of the Financials sector as their leading elements. To estimate the parameters of the model, we use graph-theoretical concepts such as vibrational potentials and the disparity measure of the respective PMFGs.
Using the computed parameters, we apply the Adomian decomposition method to obtain a solution of the model. This solution shows that, for different fractional orders α, the model always reaches an equilibrium state. For smaller values of the fractional order α, agents reach the equilibrium state relatively slowly, and the investment flows tend to share the same characteristics; for larger α values, agents reach the equilibrium state faster, again with investment flows of similar character. The model retains the memory of the investment best for 0.4 ≤ α ≤ 0.6. These results show that the fractional interaction of financial agents is consistent with reality when autocorrelations are discarded.
As the neoclassical liberal theory of economics states, markets always seek an equilibrium state. Hence, the model we present with the fractional derivative is consistent with the real data of the Borsa Istanbul Stock Exchange. We also believe that models of this kind can provide useful information for understanding and predicting global economic crises.
"Mathematics",
"Business"
] |
Modulation of endoplasmic reticulum stress via sulforaphane-mediated AMPK upregulation against nonalcoholic fatty liver disease in rats
Nonalcoholic fatty liver disease (NAFLD) is a major health concern. Endoplasmic reticulum (ER) stress, inflammation, and metabolic dysfunctions may be targeted to prevent the progression of nonalcoholic fatty liver disease. Sulforaphane (SFN), a sulfur-containing compound that is abundant in broccoli florets, seeds, and sprouts, has been reported to have beneficial effects in attenuating metabolic diseases. In light of this, the present study was designed to elucidate the mechanisms by which SFN ameliorates ER stress, inflammation, lipid metabolic dysregulation, and insulin resistance induced by a high-fat diet (HFD) and ionizing radiation (IR) in rats. In our study, the rats were randomly divided into five groups: control, HFD, HFD + SFN, HFD + IR, and HFD + IR + SFN groups. After the last administration of SFN, liver and blood samples were taken. The lipid profile, liver enzymes, glucose, insulin, IL-1β, adipokines (leptin and resistin), and PI3K/AKT protein levels were determined, as well as the mRNA expression of ER stress markers (IRE-1, sXBP-1, PERK, ATF4, and CHOP), fatty acid synthase (FAS), peroxisome proliferator-activated receptor-α (PPAR-α), and AMPK. Interestingly, SFN treatment modulated the levels of the proinflammatory cytokine IL-1β, the metabolic indices (lipid profile, glucose, insulin, and adipokines), and the ER stress markers in the HFD and HFD + IR groups. SFN also increased the expression of the PPAR-α and AMPK genes in the livers of the HFD and HFD + IR groups. Meanwhile, the gene expression of FAS and CHOP was significantly attenuated in the SFN-treated groups. Our results clearly show that SFN inhibits the liver toxicity induced by HFD and IR by ameliorating ER stress events in the liver tissue through the upregulation of AMPK and PPAR-α accompanied by downregulation of FAS and CHOP gene expression.
Introduction
Nonalcoholic fatty liver disease is defined as a fatty change (steatosis) affecting greater than 5% of hepatocytes, enlargement of the liver (hepatomegaly), and inflammation (steatohepatitis) (Cobbina and Akhlaghi 2017). The prevalence of NAFLD is increasing all over the world, especially in Western countries (Yu et al. 2019). Therefore, NAFLD has the potential to be the most common cause of chronic liver disease in the near future. NAFLD does not yet have any established clinical drug therapy. A variety of stressors have the potential to damage organs or signalling pathways, leading to genetic anomalies, functional impairment, and/or diseases. According to recent research, ionizing radiation and being overweight bring serious health risks. Meanwhile, the liver is easily influenced by many environmental conditions, such as ionizing radiation. Living organisms exposed to relatively high-dose radiation can sustain severe damage or die within a short period due to acute effects (Chiba et al. 2002). Once gamma radiation strikes the body, it induces ionization and excitation in the tissues, disrupting cellular functions. As a consequence, physiological damage to the body takes place, with the severity of this injury determined by a variety of factors such as the type and energy of the radiation, the total dose and dose rate, the body region exposed, the age at exposure, and the radiosensitivity of the exposed organ (Reisz et al. 2014). The effects of radiation on the liver can be influenced by lifestyle factors such as obesity, nutrition, and alcohol consumption, all of which are linked to a variety of liver diseases (Akiba and Mizuno 2012). Understanding the mechanisms underlying the effects of radiation on the liver will lead to a variety of applications that will make therapy more effective. Appropriate agents promote anti-inflammatory or anti-oxidative properties in the liver, potentially lowering metabolic disorders as well as the negative consequences of radiation exposure (Nakajima et al. 2018).
The molecular mechanisms underlying fatty liver are not fully understood. Dysregulation of hepatic lipid homeostasis caused by pathological conditions such as reduced fatty acid oxidation, enhanced de novo lipogenesis, elevated hepatic fatty acid influx, and/or increased systemic insulin resistance is thought to be important in the development of the fatty liver. Indeed, therapies aimed at reducing body weight and/or alleviating insulin resistance reduce the fatty liver. AMP-activated protein kinase (AMPK) is an intracellular fuel sensor important in the regulation of lipid metabolism. In the liver, activation of AMPK leads to increased fatty acid oxidation and simultaneously to decreased lipid synthesis. Of interest, antidiabetic drugs, including metformin and the thiazolidinediones, alleviate fatty liver in humans and rodents by regulating lipid metabolism through AMPK activation. Thus, AMPK represents an attractive target for therapeutic intervention in the treatment of hepatic disorders (Shen et al. 2013).
Evidence suggests that ER stress plays an important role in the development of NAFLD. As a cytoplasmic organelle, the endoplasmic reticulum (ER) is responsible for folding, synthesizing, and modifying proteins. If ER cannot carry out its functions, unfolded or misfolded proteins accumulate in the cell. This pathological situation is called ER stress. This stress triggers the response known as the "unfolded protein response" (UPR) in the cell. With this response regulated by chaperones, the proper folding capacity of proteins is enhanced and the accumulation of misfolded proteins is prevented to some extent. UPR is carried out through major chaperones called glucose-regulating protein 78 kDa (GRP78) and three ER transmembrane receptors, including activating-transcription factor 6α (ATF6α), inositolrequiring enzyme 1 α (IRE1α), and protein kinase-R-like ER kinase (PERK). Each of these proteins activates specific pathways that enhance the accurate folding capacity of proteins and accelerate the degradation of misfolded proteins (Xu et al. 2005).
Endoplasmic reticulum stress-induced IRE1α increases the synthesis of X-box-binding protein-1 (XBP-1), which is associated with many regulatory pathways. Evidence suggests that ER stress plays an important role in the development of NAFLD (Li et al. 2012; Pagliassotti 2012), and the modulation of ER stress is therefore important in the treatment of this disease. Sulforaphane (SFN), derived from the hydrolysis of glucoraphanin, has been reported to have important medicinal value. SFN has been reported to exhibit antioxidant, neuroprotective, and anticancer properties (Guerrero-Beltran et al. 2012; Liang and Yuan 2012). It has also been reported that SFN has the potential to fight obesity by activating the AMPK signaling pathway (Lee et al. 2012; Choi et al. 2014; Yao et al. 2015). Although several studies have investigated the anti-obesity properties exerted by SFN, the molecular mechanism underlying the protective role of this compound against lipotoxicity and glucotoxicity in NAFLD remains unknown. In the present study, we exploited a rat model of NAFLD to evaluate the effect of SFN on high-fat diet (HFD)- and/or ionizing-irradiation-induced hepatic oxidative damage. The study also aimed to determine the mechanisms responsible for the therapeutic effect of SFN by investigating the expression of genes related to ER stress, lipid metabolism, and insulin resistance.
Materials
Sulforaphane was obtained from Source Naturals (Scotts Valley, CA, USA). All other chemicals were purchased from Sigma-Aldrich (St. Louis, MO, USA).
Irradiation process
Whole-body ionizing irradiation (IR) was performed at the National Centre for Radiation Research and Technology (NCRRT, Cairo, Egypt) using a Canadian Gammacell-40 (cesium-137) irradiator at a dose rate of 0.67 Gy min−1 for a total dose of 6 Gy (Ramadan et al. 2001).
Animals
Thirty male albino Wistar rats were provided by the breeding unit of the Egyptian Holding Company for Biological Products and Vaccines (Giza, Egypt). The rats were 5 weeks old, weighed 130 to 150 g, and were kept in conventional cages (six animals per cage). The animals were kept in an air-conditioned (25 ± 2 °C) environment with unrestricted access to water and a regular laboratory feed (El-Nasr Co., Cairo, Egypt), and were maintained on a 12:12-h light-dark cycle. All the experimental procedures were carried out according to the principles and guidelines of the Ethics Committee of the National Research Centre and conformed to the "Guide for the Care and Use of Laboratory Animals" published by the US National Institutes of Health (NIH publication No. 85-23, 1996).
Experimental design
After a 1-week acclimation period, some of the animals were fed normal chow, while others were fed the HFD for 8 weeks. The HFD was provided by El-Nasr Co. (Cairo, Egypt) and comprised 50% carbohydrates/starch, 27% fat, 10% protein, 10% sucrose, 1.5% fiber, and 1.5% vitamins. Rats were randomly allocated into five equal groups of 6 rats each, a group size sufficient to detect treatment differences at statistical significance (p < 0.05) as calculated with the G*Power statistical program.
The animal groups were as follows: Group 1 (control): rats receiving standard chow. Group 2 (HFD): rats fed with HFD. Group 3 (HFD + SFN): HFD rats treated with SFN orally by gavage at a daily dose of 10 mg/kg body weight dissolved in distilled water (Tian et al. 2021) for 4 weeks.
Group 4 (HFD + IR): rats were fed with HFD, and exposed to fractionated doses of IR (3 × 2 Gy) up to a total exposure of 6 Gy (Kumar et al. 2017) to exasperate metabolic syndrome.
Group 5 (HFD + IR + SFN): HFD rats were exposed to IR as in group 4, and treated with SFN as in group 3.
Throughout the experiment, the body weight of the rats was recorded before and after the administration of drugs. Blood samples were collected via retro-orbital bleeding into tubes. Fasting blood glucose was estimated immediately, and the remaining blood aliquots were stored at −80 °C for further analysis. The rats were then sacrificed by cervical dislocation and liver samples were collected and divided into two portions; the first was stored at −80 °C for assessment of oxidant/antioxidant biomarkers, pro-inflammatory mediators, and gene expression. The other portions of the liver samples were fixed in 10% neutral formalin and prepared for histopathological examination.
Biochemical analyses
Using kits supplied by BIOMED, the activities of serum alkaline phosphatase (ALP), alanine aminotransferase (ALT), and aspartate aminotransferase (AST) were assessed. The free fatty acid assay was performed in serum with BioVision's Free Fatty Acid Quantification Colorimetric Kit. A commercial kit (BIOMED, Cairo, Egypt) was used to measure serum glucose levels. Total cholesterol (TC), total triglycerides (TG), low-density lipoprotein (LDL-c), and high-density lipoprotein (HDL-c) in serum were measured using commercial kits from RANDOX Reagents (USA). Lipid peroxidation products were measured using the thiobarbituric acid test for malondialdehyde (MDA) in liver tissue, as described by Satoh (1978). Superoxide dismutase (SOD) activity was determined in liver tissue spectrophotometrically (Marklund 1992). Reduced glutathione (GSH) contents were measured in liver tissue according to the method of Ahmed et al. (1991), while the activity of the catalase enzyme was determined in liver tissue according to Aebi (1984). Serum IL-1β concentration was assayed using ELISA kits for rats (Glory Science Co., Ltd., Del Rio, TX, USA). Phosphatidylinositol 3-kinase (PI3K) and phospho-AKT (Ser473) in liver tissue were estimated using MyBioSource kits (San Diego, CA, USA). Also, serum insulin levels were evaluated (Merck Millipore, Billerica, MA, USA) using an enzyme-linked immunosorbent assay (ELISA). Serum resistin and leptin were measured using ELISA kits (R&D Systems). The homeostasis model assessment of insulin resistance (HOMA-IR) was estimated as fasting insulin (µU/mL) × fasting glucose (mg/dL) / 405 (Roza et al. 2016).
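For reference, the HOMA-IR formula quoted above is straightforward to apply; the example values below are arbitrary and assume insulin in µU/mL and glucose in mg/dL.

```python
def homa_ir(fasting_insulin_uU_per_mL, fasting_glucose_mg_per_dL):
    # HOMA-IR = fasting insulin x fasting glucose / 405
    return fasting_insulin_uU_per_mL * fasting_glucose_mg_per_dL / 405.0

print(homa_ir(12.0, 110.0))   # ~3.26
```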
Detection of gene expression by real-time quantitative polymerase chain reaction (PCR)
Isolation of RNA and reverse transcription
The mRNA expression of activating transcription factor 4 (ATF4), inositol-requiring enzyme-1 (IRE-1α), adenosine monophosphate-activated protein kinase (AMPK), spliced X-box binding protein 1 (sXBP1), protein kinase R (PKR)-like endoplasmic reticulum kinase (PERK), peroxisome proliferator-activated receptor-alpha (PPAR-α), fatty acid synthase (FAS), and C/EBP-homologous protein (CHOP) was examined. Using the TRIzol reagent (Life Technologies, USA) according to the manufacturer's instructions, total RNA was isolated from 30 mg of liver tissue. Agarose gel electrophoresis (1%) with ethidium bromide staining was used to confirm the integrity of the RNA. Synthesis of first-strand complementary DNA (cDNA) was achieved with reverse transcriptase (Invitrogen) using 1 μg total RNA as the template, according to the manufacturer's protocol. RT-PCRs were performed using the Sequence Detection Program (PE Biosystems, CA) on a StepOnePlus thermal cycler (Applied Biosystems, USA). A 25-μL total reaction volume consisted of 2X SYBR Green PCR Master Mix (Applied Biosystems), 900 nM of each primer, and 2 μL of cDNA. The PCR thermal cycling conditions included an initial step at 95 °C for 5 min and 40 cycles of 95 °C for 20 s, 60 °C for 30 s, and 72 °C for 20 s. At the end of the reaction, a melting-curve analysis was conducted. The results were normalized to the glyceraldehyde-3-phosphate dehydrogenase (GAPDH) gene, which was amplified in each series of PCR experiments. The relative expression of target mRNA was determined using the comparative Ct method described by Livak and Schmittgen (2001) (Table 1).
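The comparative Ct calculation referred to above can be illustrated with the short sketch below; the Ct values are invented and GAPDH is used as the normaliser, as in the study.

```python
import numpy as np

def relative_expression(ct_target, ct_gapdh, control_dct_mean):
    # dCt per sample, then ddCt against the control group, then fold change 2^-ddCt.
    d_ct = np.asarray(ct_target) - np.asarray(ct_gapdh)
    dd_ct = d_ct - control_dct_mean
    return 2.0 ** (-dd_ct)

control_dct = np.array([24.1, 24.4, 23.9]) - np.array([18.0, 18.2, 17.9])
treated = relative_expression([22.5, 22.8, 22.6], [18.1, 18.0, 18.2], control_dct.mean())
print(treated)   # fold change of the target gene relative to the control group
```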
Histopathological study
Liver tissue specimens were obtained from five rats of each animal group and preserved in 10% neutral buffered formalin. The specimens were then trimmed, washed, dehydrated in ascending concentrations of alcohol, cleared in xylene, embedded in paraffin, sectioned at 5-6-μm thickness, and stained with hematoxylin-eosin (H&E) according to Banchroft et al. (1996). The frequency and severity of lesions in the livers were evaluated semi-quantitatively, as reported earlier by Plaa (1982), using a scale where grade 0: no apparent injury, grade I: hepatocyte swelling, grade II: hepatocyte ballooning, grade III: small lipid bubbles in hepatocytes, and grade IV: hepatocellular apoptosis and necrosis.
Statistical analysis
The data were analyzed and significance tests conducted using the statistical software SPSS (Statistical Package for the Social Sciences) version 20.0, applying a one-way ANOVA test followed by a post hoc test for multiple comparisons. All data are given as means of six values with SE, and the difference between means was deemed significant if p < 0.05.
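An equivalent analysis can be run outside SPSS; the sketch below uses scipy and statsmodels, with one-way ANOVA followed by Tukey's test standing in for whichever post hoc test was actually used, on invented group values with n = 6 per group to mirror the design described above.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
groups = {
    "control":    rng.normal(40, 5, 6),
    "HFD":        rng.normal(70, 5, 6),
    "HFD+SFN":    rng.normal(55, 5, 6),
    "HFD+IR":     rng.normal(85, 5, 6),
    "HFD+IR+SFN": rng.normal(60, 5, 6),
}
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), 6)
print(pairwise_tukeyhsd(values, labels, alpha=0.05))   # pairwise post hoc comparisons
```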
Effect of SFN on the body weight and liver injury of rats fed on HFD and/or exposed to IR
The body weight of rats in each of the five experimental groups was monitored to assess the effects of SFN on obesity. During the experiment, the body weight of rats in the HFD and HFD + IR groups increased compared with that in the control group. SFN treatment reduced this body weight gain in the treated groups (HFD + SFN and HFD + IR + SFN) compared with the HFD and HFD + IR groups, respectively (Fig. 1). Similarly, the liver enzymes (ALT, AST, and ALP) are released from hepatocytes into the blood upon liver damage and are considered the most sensitive indices of liver cell injury. Compared with the control group, serum ALT, AST, and ALP activities were significantly increased in the HFD group and were even higher in the HFD + IR group. However, SFN supplementation ameliorated this damage, as indicated by a significant (p < 0.05, ANOVA) decrease in ALT, AST, and ALP activities in the intervention groups (HFD + SFN and HFD + IR + SFN) compared with the HFD and HFD + IR model groups, respectively (Table 2).
Fig. 1 Effect of SFN on body weight of rats fed an HFD and/or exposed to IR. Each bar represents mean ± SEM (n = 6). a Significant difference versus control group, b significant difference versus HFD group, c significant difference versus HFD + IR group.
Effect of SFN on the metabolic parameters of rats fed on HFD and/or exposed to IR
To determine the effect of SFN on lipid metabolism, the levels of TG, TC, LDL-c, and FFA in the serum of the studied groups were measured. When rats subjected to HFD and/or IR were compared with those fed a normal diet, serum levels of TG, TC, LDL-c, and FFA increased significantly, while HDL-c levels decreased significantly. However, SFN supplementation to rats fed on HFD and/or exposed to IR produced significant decreases in TG, TC, LDL-c, and FFA along with elevations in HDL-c levels compared with the animals in the HFD and HFD + IR groups. Meanwhile, to investigate whether treatment with SFN affects glucose metabolism, serum was collected to determine the levels of fasting blood glucose and fasting insulin. The HOMA-IR index was calculated based on the levels of fasting blood glucose and fasting insulin. A significant increase in the serum levels of fasting blood glucose and fasting insulin, as well as in the HOMA-IR index, was observed in the rats of the HFD and HFD + IR groups compared with the rats in the normal control group. In contrast, the levels of glucose and insulin, as well as the HOMA-IR index, were significantly reduced following SFN administration in the HFD + SFN and HFD + IR + SFN groups (Table 3). This suggested that SFN could regulate lipid and glucose metabolism.
Effect of SFN on hepatic oxidative stress of rats fed on HFD and/or exposed to IR
One of the most notable features of NAFLD is oxidative stress; therefore, the effect of SFN on oxidative stress was assessed by determining the levels of MDA, SOD, CAT, and GSH in liver tissue. In the present study, a significant increase in MDA level was observed in the HFD rat group, with a higher elevation in the HFD + IR group, compared with the control group. On the contrary, significant decreases in the MDA concentration were observed in the HFD + SFN and HFD + IR + SFN groups compared with the HFD and HFD + IR groups, respectively. Meanwhile, SOD, CAT, and GSH are naturally produced cellular antioxidants that are responsible for limiting oxidative stress. The activities of SOD and CAT and the GSH content were significantly reduced in the HFD and HFD + IR groups in comparison with the control group. The intake of SFN by the HFD and HFD + IR groups significantly restored these antioxidant defences compared with the HFD and HFD + IR groups, respectively (Table 4).
SFN declines the pro-inflammatory cytokine and adipokines of rats fed on HFD and/or exposed to IR
Another important stage of NAFLD is the inflammatory response. Therefore, the effect of SFN on HFD and/or exposure to IR-induced inflammatory response was measured. Pro-inflammatory adipokines (leptin and resistin) and proinflammatory cytokines (IL-1 β) were considered as potential metabolic syndrome serum markers. Hence, their levels were analyzed in the serum of control and experimental rats.
The obtained data showed a significant increase in leptin, resistin, and IL-1β concentrations in the HFD group, with even higher elevations in the HFD + IR group. However, treatment with SFN restored the altered levels of these adipokines in rats fed on HFD and/or exposed to IR (Table 5).
Effects of SFN on the expression of adipogenesis regulators in the liver of rats fed on HFD and exposed to IR
To further understand how SFN improves fatty liver disease, we evaluated the expression of key genes encoding proteins that function in lipogenesis (FAS) and fatty acid oxidation (PPAR-α and AMPK). The qRT-PCR results revealed that FAS gene expression increased significantly, accompanied by a marked decrease in the mRNA expression of PPAR-α and AMPK, in rats fed with HFD and/or exposed to IR compared with normal controls. Interestingly, the administration of SFN to the HFD and HFD + IR groups significantly regulated the expression of PPAR-α, AMPK, and FAS compared with the HFD or HFD + IR groups, respectively (Fig. 2).
SFN regulated the expression of PI3K and Akt of rats fed on HFD and exposed to IR
Accumulating evidence indicates that dysregulation of the PI3K/AKT pathway in hepatocytes is a common molecular event associated with metabolic dysfunctions, including NAFLD and the pathogenesis of insulin resistance. The measurements of PI3K and Akt protein in liver tissue showed a significant decrease in the PI3K/AKT protein concentrations in the HFD group and a further reduction in the HFD + IR group when compared with the control group. Additionally, the results showed a significant increase in these protein concentrations in the HFD + SFN and HFD + IR + SFN groups in comparison with the HFD and HFD + IR groups, respectively. These results indicated that SFN could exert its protective effect against HFD- and/or IR-induced hepatic damage by upregulation of the PI3K/Akt signaling pathway (Fig. 3).
Effect of SFN on the expression of hepatic ER Stress biomarkers and CHOP of rats fed on HFD and/ or exposed to IR
The ER stress markers (ATF4, IRE-1α, PERK, sXBP-1, and CHOP) are crucial proteins involved in the pathogenesis of inflammation and insulin resistance; hence, we investigated their gene expression in the livers of the experimental rats. In the present study, rats fed on HFD and exposed to IR exhibited high gene expression of IRE-1, PERK, ATF4, sXBP-1, and CHOP compared with the control. On the other hand, SFN supplementation to rats fed on HFD and/or exposed to IR significantly regulated the expression of ATF4, IRE-1α, PERK, sXBP-1, and CHOP compared with the HFD and HFD + IR groups (Fig. 4).
Histology finding
Photomicrographs of normal liver samples are shown in Fig. 5a-f. Photomicrographs of the liver from rats fed on a high-fat diet showed the presence of many vacuolated areas, many fat cells, and dilatation of the central vein. The liver of an HFD rat subjected to gamma rays (NAFLD model) revealed an increase in the appearance of many vacuolated areas, as well as hydropic and fatty degeneration. The liver of HFD rats treated with SFN showed a reduction in fat cells and less dilatation of the central vein. In comparison with the control group, NAFLD rats treated with SFN showed some normalization, with minimal intracellular microvesicular steatosis, undamaged architecture, and no inflammatory foci, as described in Table 6.
Discussion
The results originating from this investigation confirm the ability of SFN to alleviate hepatic steatosis and liver damage in rats chronically fed an HFD, which supports some previous studies conducted in the NAFLD model (Wu et al. 2021; Li et al. 2021). However, the molecular mechanisms underlying the beneficial effect of SFN in the treatment of NAFLD remain controversial. The novelty of our data is that they are the first to show that this protection is mediated, at least in part, by activation of AMPK, which subsequently ameliorated hepatic steatosis and oxido-inflammatory damage by regulating ER stress, lipid metabolism, and glucose homeostasis.
In the present study, SFN administration significantly reduced the body weight gain in the HFD and HFD + IR groups. To evaluate whether hepatic steatosis was induced in this model, we first investigated the liver enzymes (ALT, AST, and ALP). Liver enzymes are usually used as a sign of liver impairment and as surrogate diagnostic markers for NAFLD, apart from liver biopsy as the gold standard for NAFLD diagnosis. Any damage to the hepatocytes increases the activity of these enzymes in the liver before they are transported into the bloodstream, and thus increases the levels of the enzymes in the serum (Eliades et al. 2013). The data showed that feeding rats with HFD and/or HFD + IR caused remarkable increases in the liver enzymes compared with those in the control group; on the other hand, SFN administration significantly rescued the markedly increased serum levels of ALT, AST, and ALP in the HFD and HFD + IR groups. Nonalcoholic fatty liver (NAFL) is mainly characterized by fat deposition in hepatocytes, visible under light microscopy as small droplets inside the cytoplasm. Thus, therapy based on reducing lipid accumulation is ideal for treating NAFLD (Wang and Malhi 2018). A previous study suggests that insulin resistance status is highly related to the alteration of lipid mechanisms, accompanied by reduced serum HDL as well as increased LDL and TG levels. In the present study, a significant increase was observed in serum TG, TC, LDL-c, and FFA, accompanied by a significant decrease in HDL-c, in both the HFD and HFD + IR groups. Interestingly, SFN administration to the HFD + IR group significantly reversed all these undesirable changes, which is in agreement with previous findings. Moreover, the reduction in the serum lipid profile of rats treated with SFN (HFD + SFN and HFD + IR + SFN) was associated with downregulation of the gene expression of FAS, confirming that SFN could reduce fatty acid chain synthesis by suppressing FAS expression. The antihyperlipidemic activity of SFN is supported by the study of Lei et al. (2019), who suggested that the anti-hyperlipidemic property of SFN might be attributed to its lipolysis activity by transcriptionally upregulating adipose triglyceride lipase and hormone-sensitive lipase in HHL-5 cells.
Fig. 5 Control group: normal appearance of hepatic cords radiating from the central vein (arrow) (grade 0); HFD + IR group (e, f): many vacuolated areas (arrow), hydropic and fatty degeneration (grade IV, NAFLD liver); HFD + IR + SFN group (g): treatment markedly attenuated the histopathological characteristics of NAFLD observed in the model group (grade I).
The exact pathogenesis of NAFLD is still unknown, but accumulating evidence has indicated important roles for oxidative stress, insulin resistance, endoplasmic reticulum stress, and chronic inflammation; these factors always interact with each other and finally lead to the occurrence and development of NAFLD. Oxidative stress is often initiated by abundant production of ROS and is considered an important contributor to the hepatocyte injury associated with NAFLD (Li et al. 2018). This is in line with our findings, in which significant increases in oxidative stress markers such as MDA were accompanied by decreased activities of catalase and SOD and decreased GSH content in the HFD and HFD + IR groups compared with the control group, suggesting the presence of oxidative stress that may play an essential role in the insulin resistance of the HFD and HFD + IR groups. In this context, the significant increase in liver MDA could be due to the interaction of ionizing radiation with water, which produces a variety of reactive oxygen species (ROS), including the hydroxyl radical (•OH), hydrogen peroxide (H2O2), the superoxide radical (•O2−), and subsequently oxygen (O2), which attack the fatty acid components of membrane lipids (Hassan et al. 2021). Nevertheless, SFN treatment reduced the oxidative stress in rats fed on HFD and/or exposed to IR, as detected by the activation of key enzymes involved in the balance of the redox state, such as catalase, SOD, and GSH. Considering the role played by SOD and catalase in the protection of cells against oxidative damage, the increased activity of these enzymes following SFN treatment suggests decreased hepatic oxidative stress and insulin resistance in SFN-treated rats (Wang and Chan 2006).
Meanwhile, insulin resistance is one of the variables in the metabolic syndrome associated with NAFLD and it is defined as a decrease or insufficient insulin sensitivity in the target tissues, such as muscle, adipose tissue, and liver, towards glucose uptake from the blood (Petersen and Shulman 2018). Previous data showed that HFD-fed animals demonstrated a reduction in insulin sensitivity (Kuipers et al. 2019;Meijer and Barrett 2021). This is attributed to the excessive free fatty acids derived from HFD, which inhibit insulin binding, degradation, and function and hence cause a decrease in glucose uptake from the blood (Ter Horst et al. 2017). The HOMA-IR index is used to assess systemic insulin resistance, and higher HOMA-IR values indicate a higher degree of insulin resistance. Our findings showed that rats in the HFD and HFD + IR groups demonstrated insulin resistance as indicated by significantly elevated values of HOMA-IR relative to the control group, which was confirmed by the previously reported studies. However, the elevated HOMA-IR index values that were associated with HFD administration were restored to normal control levels after SFN treatment.
Current research holds that insulin resistance plays an important role in the pathogenesis of NAFLD. The PI3K/Akt signaling pathway is one of the main downstream pathways of insulin, and Akt is the key signal transduction molecule in the PI3K pathway. Physiologically, insulin induces upstream activation and then Akt phosphorylation, which further mediates glycogen synthesis, glycolysis, glucose transport, protein synthesis, and lipid synthesis. Some research has also demonstrated that Akt can directly inhibit the gene expression of fatty acid oxidation and thus regulate liver lipid metabolism. Although much evidence indicates that activation of the PI3K/AKT pathway is associated with marked accumulation of intracellular lipid droplets and progression of NASH to fibrosis, some studies have revealed that PI3K/Akt activation is beneficial for ameliorating insulin resistance, oxidative stress, and lipid accumulation (Li et al. 2018). In the current study, HFD and/or ionizing radiation exposure inhibited the activity of the PI3K/Akt proteins, which in turn induced significant insulin resistance, as detected by the significant increase in the HOMA-IR index. Additionally, studies have also demonstrated that some natural products ameliorate NAFLD by regulating the PI3K/Akt pathway (Matsuda et al. 2013; Li et al. 2018), which is in accordance with our result that SFN could reduce insulin resistance by activating PI3K/Akt phosphorylation.
Furthermore, previous studies have presented various concrete data on the role of adipose tissue as a key endocrine organ that mediates the metabolic activities of the brain, muscle, and cardiovascular system (Antuna-Puente et al. 2008;Kamada et al. 2008). The adipocytokines such as IL-1β, leptin, and resistin released by the adipocytes control insulin sensitivity and inflammation, which also take part in the pathogenesis of NAFLD and its progression to NASH (Fjaere et al. 2019;Silva et al. 2020). In the present study, we showed that HFD feeding and IR exposure significantly increased the levels of IL-1β, leptin, and resistin compared to the control group. In contrast, SFN administration significantly reduced this parameter as well as increased insulin sensitivity, which is in accordance with previous studies reported in animal and clinical trials (Kujawska-Luczak et al. 2018;Suleiman et al. 2020).
Dysfunction of the ER, the main cellular compartment involved in secretory and transmembrane protein folding, calcium homeostasis, and lipid biogenesis, is involved in metabolically driven NAFLD pathologies through the activation of ER stress signaling, during which the expression of FAS is upregulated (Ashraf and Sheikh 2015;Huang et al. 2010). In this study, the expression of IRE-1α, sXBP1, PERK, ATF4, and CHOP in response to ER stress was markedly increased in HFD and HFD + IR, but SFN treatment significantly reversed the increased activation of this gene expression. Taken together, the results reveal that SFN treatment regulates lipid metabolism by suppressing the ER stress in NAFLD. We investigated the mechanism underlying the attenuation of lipid metabolism disorders by SFN through inhibition of ER stress. AMPK, as an energy sensor, contributes to keeping cellular energy homeostasis (Hardie 2008). Activated AMPK abolishes the lipid synthesis process and reduces TG production in the liver (Yang et al. 2014). Previous research indicates that activation of AMPK inhibits ER stress (Kim et al. 2015) and lipogenesis and stimulates fatty acid oxidation by inhibiting the expression of lipid metabolismrelated proteins (Zhou et al. 2017). FAS has been identified as an important target of AMPK. Phosphorylation of AMPK may downregulate the expression of FAS, ultimately leading to the inhibition of lipid synthesis (LiY et al. 2011). Moreover, AMPK has been reported to positively regulate FA oxidation by activating PPARα (Lee et al. 2006;Choi et al. 2014). Additionally, there is evidence to indicate that AMPK activates the PI3K/Akt pathway by inhibiting the phosphorylation of insulin receptor substrate-1 (Zheng et al. 2015).
Consistent with a previous study by Docrat et al. (2018), the livers of HFD and HFD + IR rats in the present study exhibited a significant downregulation in AMPK gene expression compared with the control group. The PI3K/Akt levels, as well as PPARα gene expression levels, were also reduced in the model groups compared with the control. However, SFN treatment for 4 weeks significantly reversed the decreased levels of AMPK, PI3K/Akt, and PPAR-α, indicating that SFN can regulate the expression of AMPK and its downstream targets.
Overall, the results of the present study suggest that SFN could effectively prevent the progression of NAFLD in a rat model, as evidenced by its ability to attenuate the HFD- and/or ionizing radiation-induced increases in serum levels of liver enzymes, lipids, and inflammatory adipokines, and the disturbance of glucose homeostasis. Mechanistically, the results also show that SFN regulated lipid metabolism, insulin resistance, and ER stress in the liver via the AMPK-dependent upregulation of PPAR-α, PI3K/AKT, and their target proteins. In conclusion, the key finding of the present study is that SFN reduced body weight, covalently inhibited chaperone activity, and could disconnect the transduction of ER stress signals from the inflammatory response and lipid metabolism. It can be concluded that sulforaphane could act as an antioxidant and anti-inflammatory agent, highlighting the potential value of SFN in the management of obese and irradiated patients to protect against, or at least mitigate, the side effects of therapy.
"Biology",
"Medicine"
] |
Standards-based curation of a decade-old digital repository dataset of molecular information
Background The desirable curation of 158,122 molecular geometries derived from the NCI set of reference molecules, together with associated properties computed using the MOPAC semi-empirical quantum mechanical method and originally deposited in 2005 into the Cambridge DSpace repository as a data collection, is reported. Results The procedures involved in the curation included annotation of the original data using new MOPAC methods, updating the syntax of the CML documents used to express the data to ensure schema conformance, and adding new metadata describing the entries, together with an XML schema transformation to map the metadata schema to that used by the DataCite organisation. We have adopted a granularity model in which a DataCite persistent identifier (DOI) is created for each individual molecule to enable data discovery and data metrics at this level using DataCite tools. Conclusions We recommend that the future research data management (RDM) of the scientific and chemical data components associated with journal articles (the "supporting information") should be conducted in a manner that facilitates automatic periodic curation. Graphical abstract: Standards- and metadata-based curation of a decade-old digital repository dataset of molecular information.
Background
Research data repositories based on platforms such as DSpace [1] were introduced about 10 years ago, and their use in domains such as chemistry and the molecular sciences has gradually increased [2,3]. Their importance has recently come to the fore, with funding agencies in the USA, Europe and Asia all indicating that open deposition of research data will become a mandatory aspect of their funding, and many universities are now starting to consider the implications of implementing research data management, or RDM [4][5][6]. An early example of such RDM is illustrated by a project to produce a library of quantum-mechanically-optimised molecular coordinates derived from a computable subset of the National Cancer Institute's (NCI) collection of small molecules [7]. The information for each molecule was originally annotated by optimising the coordinates with respect to the energy obtained using the semi-empirical PM5 parameter set in MOPAC [8] (then the most current parameter set) and creating a DSpace collection. At the commencement of the present project, the original deposition of this information for 175,356 molecules into the institutional repository of the University of Cambridge [9] represented the only openly accessible copy.
An issue frequently raised in the context of research data management relates to the prospects of being able to access and use such digitally held information in the future. Relatively recently, such questions were largely directed towards the expected longevity of physical media such as punched cards and floppy disks (both now effectively extinct), hard drives, CDROMs, DVDs, magnetic tape etc. Few of these media have proven lifetimes exceeding 20 years and the real problem would be locating working devices capable of reading such physical media in the future. Quite different problems are associated with virtual collections, where the physical medium is less important than the information associated with the data itself. In this context, it is becoming increasingly accepted that successful long-term preservation of digital data depends upon repeated incremental improvements or curations taking place in 5-10 year cycles. Such operations can in principle be repeated indefinitely, thus creating a long-term mechanism with an anticipated lifetime of 100+ years if required. These curation cycles can track the evolution of data storage hardware, data formats and introduction of new software, so ensuring that the data remains accessible and in a usable form. The purpose of this project was to explore the viability of the longterm preservation of the 10 year old Cambridge dataset through such an incremental curation by performing its migration to the SPECTRa repository hosted at Imperial College London [2]. Specific benefits of undertaking such a curation include re-filtering the original source data for errors not previously eliminated, to produce an enhanced metadata record for each entry, and to recompute the optimised molecular coordinates by using the newer PM7 method. The original PM5 method used to obtain the molecular geometries was never formally published and is now unavailable, whereas the succeeding PM7 method has been formally peer reviewed and published [10].
We will also compare our approach with two other examples drawn from computational chemistry. The first [11] is typical of how almost all datasets derived from molecular computations are currently curated; in this case the stochastic generation of all possible stable molecular structures from an initial set of specified atoms. The trend in scientific publication in recent years has required authors reporting such studies to include more extensive data in the form of supporting information (SI) to accompany the scientific narrative from which their models are constructed and their conclusions drawn. We will argue here that these SI-based mechanisms for depositing, retrieving and re-using the data components of journal articles are no longer fit for this purpose (if indeed they ever were) and should be urgently replaced by repositories of data and closely-coupled metadata as a fundamentally different model for research data management. The second example describes [12] such a deposition of a dataset containing the quantum mechanically computed structures and properties of 134,000 molecules into the Figshare digital repository. We will ask here what the attributes of such a deposition must be in order to enable efficient formal re-curation 10 years after the original creation of the dataset, arguing that there are some essential structures and standards that must be fulfilled for such a process to be properly enabled.
Methods
The migration of the original dataset was performed in three sequential phases, retrieval from the original repository, a technical validation and re-deposition into the SPECTRa repository.
Retrieval
Both the Cambridge and Imperial-SPECTRa repositories are implemented using DSpace [1]. Although this software contains a component that can provide structured data representations of entries for harvesting (OAI-ORE [13] resource maps), this was not enabled on the Cambridge repository when we started our migration in July 2014. However, since the human-readable landing pages for each entry all conform to a structured HTML template, it nevertheless proved possible to extract all the data using ad hoc scripting and HTML processing (a process often informally referred to as "web-scraping"). This process was markedly inefficient, requiring three separate HTTP requests to the server per record, and took several days to complete. This approach is by no means unique; most large existing collections of (chemical) data require similar processes whereby a human has to initially read the documentation (if available) for the templates used to access the items and then to write appropriate custom codes or scripts to retrieve them. Such a method means that any unexpected change in the template resulting from, for example, the release of a new version of the dataset then inherits the risk of breaking these scripts. Stated more formally, the inferred uniform resource locators (URLs) for such collections of data are not persistent. The principal aim of our curation objectives therefore was to eliminate the need for such ad hoc scripting and replace it with a more efficient and standards-based workflow for achieving this persistence.
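The kind of ad hoc retrieval described above looked roughly like the following sketch. It is a schematic reconstruction rather than the scripts actually used: the handle URL template, the landing-page markup and the bitstream link pattern are assumptions made purely for illustration.

```python
import requests
from bs4 import BeautifulSoup

BASE = "https://www.repository.cam.ac.uk"          # repository base URL (assumed)

def fetch_entry(handle):
    # Request 1: the human-readable landing page for one record.
    landing = requests.get(f"{BASE}/handle/{handle}", timeout=30)
    landing.raise_for_status()
    soup = BeautifulSoup(landing.text, "html.parser")
    cml_files = {}
    for link in soup.find_all("a", href=True):
        href = link["href"]
        if href.endswith(".cml"):                   # assumed bitstream link pattern
            # Requests 2 and 3 in practice: one per CML bitstream.
            cml_files[href.rsplit("/", 1)[-1]] = requests.get(BASE + href, timeout=30).text
    return cml_files

# Example call; the actual handle suffixes are not shown in the text.
# fetch_entry("1810/XXXX")
```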
The following were retrieved from the original deposition [9] at Cambridge: • The source URL for 175,356 records.
• 175,356 documents in XML-CML syntax encoded using chemical mark-up language (CML) [14], containing a molecular structure from the NCI database and some metadata describing the entry. • 158,879 XML-CML documents containing the PM5 optimized coordinates of the NCI database structure and basic metadata, including the NCI identifier for version 3 of the NCI Open Database and the computed InChI and InChIkey [15]. Of these, 158,122 were found to be unique. The remaining 16,477 entries had no reported PM5 calculation. These entries were previously identified [7] as having additional complexities such as the presence of metal atoms or problems with correctly adding hydrogen atoms and charges, and so a PM5 calculation had not been attempted. Here we have adopted the same strategy of not recovering these entries in the present curation.
Technical validation
No metadata were provided in the original depositions that gave an unambiguous description of the two XML-CML documents present, in the form of a CML schema declaration, and no MOPAC version information or MOPAC input or output files were saved to act as alternative sources of this information. The CML syntax corresponding to the annotation derived from the PM5 optimisation, in the form of files named e.g. nsc138467_post-mopac.cml in the original collection, was incomplete; bond connection terms were missing and the CML documents failed validation against the CML Schema version 2.4 [16]. The first task was therefore to develop a protocol to produce a reliable and valid input file suitable for re-calculating the properties using the newer PM7 method [10]. Many entries in the NCI collection comprise two or more disconnected components, of which only the larger component was retained in the original editing [7]. The component missing from the starting structure was predominantly a counter-ion, and its removal requires a charge to be assigned to the remaining fragment. This information was originally captured in both of the original XML-CML documents: in the first as part of an identifier element containing an early form of the InChI string, and in the second declared more formally in the CML molecule element associated with the PM5 calculation. Of the 158,122 unique documents in the latter category, the formalCharge declarations were distributed as follows: 153,127 (0), 28 (−1), 4,456 (+1), 18 (−2), 483 (+2), 2 (−3), 3 (+3), 1 (+4), 4 (−5). Manual inspection of the species with very large formal charges (>|3|) indicates that these are all errors arising from the original curation process because of incorrect interpretations of e.g. metal centres. Our original attempt to transform this information into a MOPAC input involved the standard OpenBabel [17] program, version 2.3.2. It transpired that OpenBabel did not correctly propagate the charge information in either of the original CML files into an appropriate MOPAC keyword declaration such as CHARGE=1; instead the generic statement PUT KEYWORDS HERE was the only content of the MOPAC keyword line. This raises some interesting issues:
1. Absolute fidelity in any syntactic transformation of data from one format to another is very difficult to achieve. There are often multiple syntaxes for any given information field, such as the two described above for expressing the charge on a molecule, and all such variations must be honoured with complete fidelity to achieve reliability. Although some forms can be quickly deprecated (such as the first example above), these forms cannot be ignored and must still be processed.
2. The MOPAC program does not mandate the presence of all keywords. A calculation may still succeed on the assumption that a missing keyword simply defaults to a pre-determined value. In this case, MOPAC will assume that the value of an undeclared CHARGE keyword corresponds to zero, which is a clear error if the charge was intended to be non-zero. This issue of implicit semantics is perhaps the single largest problem in ensuring validation. It can be very difficult, if not impossible, to find complete definitions of the implicit assumptions made in any system; often the only source of these is the actual computer code itself.
3. A further implicit rule for MOPAC keywords is that the spin multiplicity of the system is computed from the total electron count after the appropriate charge is applied. For a system where a charge of e.g. +1 is left undeclared, the result is a molecule with an odd number of electrons, and this is then treated implicitly as a molecule with a DOUBLET spin state. We also note that these implicit rules are not universal; other programs such as Gaussian use different conventions.
4. If the explicit keyword SINGLET (spin state) is declared, a safe assumption for virtually all real molecules that exist as physical samples, this can act as a checksum: MOPAC will then throw an error and the calculation will not proceed if this spin state conflicts with any declared or undeclared/implicit charge.
Instead of using OpenBabel, we made a custom conversion of the CML files from the original post-MOPAC PM5 calculation to ensure that the correct keywords were written to the MOPAC input file. The atom positions were expressed in internal coordinates rather than Cartesian coordinates; this is not a critical decision, since the final atom positions do not in general depend on the initial coordinate system selected. A PM7 geometry optimisation was then performed using the resources of the Imperial College High Performance Computing Service. The majority of calculations completed within tens of seconds, and the total required approximately 20 CPU days of computer time.
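A minimal sketch of such a conversion step is shown below. Its only point is that the keyword line carries an explicit CHARGE and the SINGLET checksum keyword rather than relying on MOPAC's implicit defaults; for brevity the sketch emits Cartesian rather than internal coordinates, the example geometry is arbitrary, and parsing the formalCharge out of the CML is assumed to have been done already.

```python
def write_mopac_input(path, atoms, charge, method="PM7"):
    """Write a simple MOPAC Cartesian input; atoms is a list of (element, x, y, z)."""
    keywords = f"{method} CHARGE={charge} SINGLET PRECISE"
    lines = [keywords, "title: curated NCI entry", ""]
    for element, x, y, z in atoms:
        # The "1" flags mark each coordinate as optimisable.
        lines.append(f"{element:2s} {x:12.6f} 1 {y:12.6f} 1 {z:12.6f} 1")
    with open(path, "w") as handle:
        handle.write("\n".join(lines) + "\n")

# Arbitrary example geometry (water), formal charge 0.
write_mopac_input("example.mop",
                  [("O", 0.0, 0.0, 0.0), ("H", 0.96, 0.0, 0.0), ("H", -0.24, 0.93, 0.0)],
                  charge=0)
```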
InChI identifiers
An InChI identifier [15] is a canonicalization based on the atom connectivity of a molecule, which in turn is derived from the Cartesian coordinates of each atom using simple heuristic rules specifying a range of atom-pair distances for any element combination. These distance ranges are built into OpenBabel [17]. Unfortunately, atom connection distances are not formally defined as accepted standards, and the precise values are ultimately the choice of the designers of any program implementing them. The limits, however, are usually sufficiently tolerant to cover the vast majority of real molecules without any disagreement, and this is especially true of the NCI set, which covers real systems rather than hypothetical or computed molecules. This does not entirely exclude there being a very small number of molecules where specific atom-pair distances might fall within e.g. a bond range using PM5-optimised coordinates but outside such a range using PM7 values. We note that whilst it is possible to replace these relatively arbitrary rules by using a quantum mechanically derived property of the electron density topology, the BCP (bond critical point), to define atom-pair connectivity [18], this is not currently used for determining InChI identifiers.
babel -i xyz in.xyz -o inchi out.inchi
Commands 1-3 convert all the data into Cartesian coordinates to remove any possible atom connection data that might have been generated by MOPAC or other sources. Command 4 generates a canonical InChI identifier [15] using these coordinates. This process ensures that the connectivities created using the last command and then used to create the InChI are normalised against a single connection algorithm (being the one contained in OpenBabel, version 2.3.2). These InChI strings are then compared with those derived in a similar manner using the original NCI and the original PM5 computed coordinates (Table 1).
Of the 158,122 unique values (Table 1), 97.7 % matched for all three instances, which provides a good measure of confidence that the atom-connection algorithm is robust. To identify the origin of the 2.3 % of InChI mismatches, we have to dissect the InChI identifier itself into its component layers:
1. The molecular formula layer (1131).
2. The pairwise atom connectivity layer, determined as described above (127).
3. The hydrogen layer, in which hydrogen atoms are added to all heavy atoms where a valence is perceived to be unsatisfied if the hydrogens are not already declared. Because we have subjected all the systems to computational quantum modelling, all hydrogen atoms are already explicitly defined in our coordinates (1252).
4. A charge layer, also defined for all the molecules in our collection (9).
5. A stereochemical layer. Because our coordinates are all specified in 3D space, the stereochemistry is always defined. This layer includes double-bond isomerism (292) and tetrahedral configurations (267).
6. An isotope layer (22).
Table 1 Comparison of generated InChI identifiers
The distribution of the 2,997 differences between the PM7 and the NCI InChI identifiers (2,041 + 470 + 486, Table 1) is shown in parentheses in the listing above, and each is briefly discussed below:
1. The discrepancies in the formula layer originate from mismatches in the hydrogen count. This is because, historically, molecules were not always defined with explicit coordinates for all hydrogen atoms. Instead they were inferred from residual valences, these in turn inferred from bonding angles and other geometric and heuristic information. The process of replacing such implicit hydrogens with explicit ones is not always exact.
2. The connection-layer mismatches originate from bonds that are on the verge of connection and derive from (possibly small) geometric changes arising from the quantum mechanical re-optimisation. A typical example of such uncertainty is the putative S…S bonds in sulfur species [19].
3. This layer and the formula layer together account for the great majority of the mismatches.
4. The small number of mismatches in the charge layer may result from the InChI code heuristic for deciding the appropriate charge for a molecule. As noted above, we detected some unreasonably high charges resulting from this process.
5. Because molecules were traditionally expressed in the MolFile V2 format, which allows just 2D coordinates to be defined, stereochemistry had to be added using an additional parameter associated with each bond connection, equivalent to the stereochemical wedge notation used in organic chemistry. This information is not free of ambiguity, since the stereochemistry is defined relative to other atoms and can lead to logical contradictions. When such two-dimensional coordinates and this additional information are converted into 3D coordinates (a process carried out during the original deposition [7]), ambiguities can result.
6. Isotopes were not included in the MOPAC-PM7 calculation.
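The layer-by-layer dissection can be automated along the lines of the sketch below, which splits two InChI strings on their "/"-separated, prefix-labelled layers and reports which layers differ; the two example strings (methane and methanol) are arbitrary.

```python
def inchi_layers(inchi):
    # Drop the "InChI=" prefix, then split into version, formula and prefixed layers.
    body = inchi.split("=", 1)[1]
    segments = body.split("/")
    layers = {"version": segments[0],
              "formula": segments[1] if len(segments) > 1 else ""}
    for segment in segments[2:]:
        layers[segment[0]] = segment[1:]     # key by the one-letter layer prefix (c, h, q, b, t, m, s, i)
    return layers

def differing_layers(inchi_a, inchi_b):
    a, b = inchi_layers(inchi_a), inchi_layers(inchi_b)
    return sorted(k for k in set(a) | set(b) if a.get(k) != b.get(k))

print(differing_layers("InChI=1S/CH4/h1H4", "InChI=1S/CH4O/c1-2/h2H,1H3"))
```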
Re-deposition
For each remaining entry, the PM7-derived InChI strings and keys were added to the SMILES strings and the NCI and CAS accession identifiers obtained from the original data, and propagated as metadata. We note that the NCI identifiers themselves may not necessarily persist across different versions of the NCI database, which was version 3 at the time of the original curation and was subsequently updated to version 4 in 2012 [20]. Prior to import into SPECTRa, each entry was packaged individually to produce an archive file, termed a SWORD [21,22] bundle. SWORD (Simple Web-service Offering Repository Deposit) is an interoperability standard for data ingest into digital repositories, rendering these bundles suitable for import into any SWORD-compliant repository, not just the DSpace-based SPECTRa. The bundles contain a METS manifest [23] and data files and were created using a locally written tool.
The METS manifest contained the following metadata:
• InChI, InChIKey and SMILES strings.
• CAS and NCI accession IDs, NCI entry name.
• Back-link to the entry in the Cambridge repository.
• DOI link to the published description [7].
• Link to Creative Commons License terms.
The datafiles included within the bundle were:
• Two CML files [14] containing unaltered copies of the NCI coordinates [20] and the PM5-computed MOPAC output documents obtained from the original source repository.
• A third CML file combining three structures: the original NCI structure, the original PM5 structure from the original repository, and the newly computed PM7 structure.
• MOPAC input and output files for the new PM7 calculation.
Import of this fileset to the destination SPECTRa repository was performed using the SWORD web service interface. Owing to a limitation of the DSpace-SPECTRa SWORD interface, no bulk-import function was available and all of the new packages had to be uploaded individually, a process that took approximately 60 days. Doubtless this exceptionally long time resulted from some undiagnosed server misconfiguration and should not be considered representative.
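The bundling and deposit tools were written locally and are not reproduced in the paper. The following sketch (an assumption, using the generic requests library rather than the actual tool) illustrates the per-entry pattern of zipping a METS manifest with its data files and POSTing each package to a SWORD endpoint; the URL, credentials and headers are placeholders, and a real SWORD deposit would also carry packaging-specific headers omitted here.

```python
import zipfile
from pathlib import Path

import requests

SWORD_ENDPOINT = "https://repository.example.org/sword/deposit/collection"  # placeholder

def make_bundle(entry_dir: Path, bundle_path: Path) -> Path:
    """Package a METS manifest and the entry's data files into one zip archive."""
    with zipfile.ZipFile(bundle_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in sorted(entry_dir.iterdir()):
            zf.write(f, arcname=f.name)   # mets.xml, CML files, MOPAC files, ...
    return bundle_path

def deposit(bundle_path: Path) -> None:
    """POST one SWORD package; in practice ~158,000 of these were uploaded one by one."""
    with open(bundle_path, "rb") as fh:
        r = requests.post(
            SWORD_ENDPOINT,
            data=fh,
            headers={"Content-Type": "application/zip",
                     "Content-Disposition": f"filename={bundle_path.name}"},
            auth=("depositor", "secret"),  # placeholder credentials
        )
    r.raise_for_status()

for entry in sorted(Path("entries").iterdir()):       # hypothetical layout: one folder per molecule
    deposit(make_bundle(entry, Path("bundles") / f"{entry.name}.zip"))
```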
Exposing the metadata structures on DSpace-SPECTRa
The outcome of the curation process resides in a new collection on the SPECTRa repository comprising 158,122 entries. The new curation has two persistent identifiers for the collection itself [25]; within that collection, individual molecular entries are also each assigned two persistent identifiers, as for example the entry shown in Figs. 1 and 2 [26,27]. The first of these is the handle, with the registered prefix 10042 associated with the SPECTRa DSpace server. The second is the DataCite DOI associated with the prefix 10.14469 registered to Imperial College, with individual entries prefixed with the common string ch/ to indicate the chemistry department at that institution. The individual items in the collection also have a full set of associated metadata descriptors (Fig. 1).
Newly introduced metadata since the creation of the original collection include the following:
• The contributors are listed individually, with each name linked to their corresponding ORCID [24] entry page.
• The computational resource used for annotation is also linked, with a non-persistent identifier; currently this points to the Web landing page for the organisation.
• Several chemical identifiers are included, such as SMILES, InChI and the CAS accession number. The significance of including such metadata is that it is registered automatically with DataCite.org [28], and hence available for fielded searches [29].
• The ORCID entries [24] for all collaborators are explicitly listed, and again become available for searching [29].
• A back-link to the original item deposition [9] allows comparison of the original and the newly curated entry. Because this handle prefix (1810) is unregistered with CNRI, the central authority for Handle registration, it cannot be treated formally for resolution as a persistent identifier. This is one of the aspects we wished to rectify in the current curation.
• There is also a persistent identifier link to the journal article [7] describing the original work. In due time, the present article could itself be so referenced in a future curation.
• A pair of new persistent identifiers for each molecule has been minted as part of the curation. The first is a handle assigned using the Handle manager tool in DSpace itself, which can be resolved using either of the services http://hdl.handle.net/ or http://doi.org/. This handle is internally annotated with 10320/loc records [30] to enable automated retrieval of individually requested files from the deposition. The prefix 10042 is registered to the SPECTRa server.
• The second persistent identifier [27] is assigned using the DataCite API [28], and serves as a mechanism to allow DataCite to acquire the metadata for this entry. The prefix 10.14469 and the suffix ch are as described above.
• A (non-persistent) link to the original publisher is included.
• A (non-persistent) link to the open license for the data, in this instance Creative Commons CC0 [31]. It is perhaps surprising that this license is itself not identified by its own persistent identifier, but the URIs for the CC licenses and the corresponding resources are nonetheless machine-processable.
This metadata describes the contents of the data files resident for each entry (Fig. 2).
The fileset for the deposition comprises two so-called bundles. The first item is identified internally as the SWORD bundle. This compressed archive contains the METS manifest [23] for the deposition, expressed syntactically as an XML file containing a number of declared namespaces defining various metadata schemas. The METS manifest, along with another internal XML document, the OAI-ORE resource map [13], defines the contents, locations and properties of the documents comprising the collection.
The second item (Fig. 2) includes three files expressed syntactically as XML documents declaring the CML schema [16]. A semantically rich encoding of the molecular information can be found within each file. Also included in this fileset are three files relating to the MOPAC program: the input file, the corresponding PM7 output, and the PM7 archive file summarising the computed properties. In principle, all the information in these files could also be absorbed into the CML descriptors, although this has not been done in the present instance. These files in turn have associated MIME types [32], information that allows automated retrieval of the files using one of the mechanisms briefly described below.
Metadata interfaces to DataCite
In curating the original collection at Cambridge by relocating it to a separate DSpace server, we wished to ensure that new persistent identifiers for each entry could be minted using DataCite. That in turn required the metadata, held on the DSpace-SPECTRa repository in the Dublin Core schema, to be mapped onto the DataCite schema using an XSLT-based crosswalk transform. The following procedure was used to achieve this.
• A recent release of DSpace (DSpace 4) largely automates the minting of DOIs using DataCite. Our target DSpace (SPECTRa) is running version 1.8; the DOI module for DSpace 4 is confined to a few distinct packages, and these were implemented into version 1.8 without affecting the other components. The relevant Java packages were extracted from DSpace 4 and used within DSpace 1.8.
• DOI-specific properties in the existing install were configured via dspace.cfg. An auxiliary configuration file, spring-dspace-addon-identifier-services.xml, is packaged within the org.dspace.identifier package and used for connection details.
• The XML schema transformation that translates or "crosswalks" between the DSpace Dublin Core metadata schema and the DataCite metadata schema was configured. DSpace 4 delivered the requisite crosswalk, DIM2DataCite.xsl, for version 2 of the DataCite schema.
• A further requirement was to provide metadata describing the locations, filenames and file types of the individual datafiles associated with each DOI, in order to provide a machine-discoverable and operable path from the DOI directly to the files containing chemical data. To achieve this, the DSpace 4 XML schema transformation (crosswalk) was extended to include the locations of the METS and OAI-ORE metadata files generated by DSpace as relatedIdentifiers. These related identifiers used the HasMetadata relation type, which was introduced in version 3.0 of the DataCite schema. Both the METS and OAI-ORE files contain the desired metadata and can be processed as required. As an example, one file element in the fileSec section of the METS carries the attributes ID="file_1367638" MIMETYPE="chemical/x-cml" SIZE="18955" CHECKSUM="88761c87f8f090182d910f33a7467435".
At this stage, the DataCite Search API [29] proved to be a useful tool for checking the quality and validity of the curation and its metadata. Search queries were used to retrieve lists of all entries belonging to the new DSpace-SPECTRa collection, in an easily parsed format and with the necessary metadata, to identify discrepancies such as duplicate DSpace depositions, duplicate assigned DataCite DOIs, or corrupted or invalid metadata. Some examples of such use are collected in Table 2 and are also described below. An advantage is that this kind of analysis can be done without privileged access to the host repository and its underlying databases, which makes it easier for peers and users to scrutinize the quality of large open data collections and flag any potential errors.
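As a concrete illustration of this kind of consistency check, the sketch below queries DataCite for all DOIs registered under the collection's prefix and looks for duplicated records. It uses the present-day DataCite REST API (api.datacite.org), whereas the paper used the earlier Solr-based search interface, so the endpoint, parameters and response fields shown here are assumptions rather than the authors' queries.

```python
from collections import Counter

import requests

API = "https://api.datacite.org/dois"   # current REST API; the paper used search.datacite.org

def collection_dois(prefix: str = "10.14469"):
    """Yield (doi, titles) pairs for every record registered under a DOI prefix."""
    url, params = API, {"prefix": prefix, "page[size]": 1000}
    while url:
        page = requests.get(url, params=params).json()
        for item in page.get("data", []):
            yield item["id"], item["attributes"].get("titles", [])
        # Follow server-supplied pagination links until exhausted.
        url, params = page.get("links", {}).get("next"), None

titles = Counter(str(t) for _, t in collection_dois())
duplicates = {t: n for t, n in titles.items() if n > 1}
print(f"{sum(titles.values())} records, {len(duplicates)} duplicated title sets")
```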
Results and discussion
The configured metadata infrastructures now associated with each item in the collection enable individual datafiles to be accessed based only on knowledge of the persistent identifiers and the media type, which can be allowed to default to a specific type. We have implemented three procedures for doing this; these are fully described elsewhere, with a discussion of the pros and cons of each approach [33,34]. Here we provide only a summary of these methods.
1. The first access method to be developed [33] is based on extensions to CNRI Handle record types known as 10320/loc [30]. These allow the handle record to be retrieved using the Handle REST API, which provides programmatic access to handle resolution over HTTP. A typical invocation would use a URL of the type http://doi.org/10042/31117?locatt=mimetype:chemical/x-cml, where the string 10042/31117 is the assigned Handle identifier and chemical/x-cml the requested media type.
2. The DataCite Media API also allows a DOI to be resolved based on the media type of the required document, typically with a URL of the form http://data.datacite.org/chemical/x-cml/10.14469/ch/153690, where the string 10.14469/ch/153690 is the assigned DataCite identifier and chemical/x-cml the requested media type [34]. This URL can be passed to any requesting program and the file associated with this information will then be retrieved from the repository.
3. OAI-ORE Resource Maps exposed through DataCite metadata. We have made the OAI-ORE Resource Map [13] and the METS manifest [23] (both generated internally by DSpace) discoverable by including their locations as relatedIdentifiers within the DataCite metadata for the dataset [35], as described above. This allows a script to query, for example, the resource map to retrieve the URL associated with the data file. Again, the only information required by the script is datacite_jmol('10.14469/ch/153690?chemical/x-cml'), where datacite_jmol is the JavaScript function written to process the responses [36].
Any of the above methods [34] can be used in conjunction with, for example, a visualisation program that converts the data contained in the retrieved file into a graphical representation, or as part of a script that retrieves a larger number of files for purposes such as data mining.
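To make the access patterns concrete, the following sketch retrieves the CML file for one entry through the first two resolution routes quoted above, using the example identifiers given in the text. The code is illustrative rather than the authors' tooling, and both resolver services have evolved since 2015, so the URLs may no longer behave exactly as shown.

```python
import requests

def fetch_via_handle(handle: str, mime: str = "chemical/x-cml") -> bytes:
    """Resolve a handle carrying 10320/loc records with a media-type hint (method 1)."""
    url = f"http://doi.org/{handle}?locatt=mimetype:{mime}"
    return requests.get(url, allow_redirects=True).content

def fetch_via_datacite_media(doi: str, mime: str = "chemical/x-cml") -> bytes:
    """Resolve a DataCite DOI through the media API (method 2)."""
    url = f"http://data.datacite.org/{mime}/{doi}"
    return requests.get(url, allow_redirects=True).content

# Example identifiers quoted in the text.
cml_from_handle = fetch_via_handle("10042/31117")
cml_from_doi = fetch_via_datacite_media("10.14469/ch/153690")
print(len(cml_from_handle), len(cml_from_doi))
```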
Data discovery and datametrics
Enhancement of the original Cambridge dataset with the features described above greatly improves the discoverability of the data. Enriching the metadata and then exposing it in a manner that allows the DataCite organisation to harvest it enables exploitation using the DataCite interface [29] and allows statistics to be collected [37]. Examples of both are shown in Table 2.
The current DataCite search resource is still styled as beta, and it is probable that the features offered will be greatly enhanced in the future.
The benefits of achieving SWORD/OAI-ORE and METS-enabled endpoints
Perhaps the most significant technical improvement realised as a result of this activity is the facilitation of future curation efforts, as part of a strategy to address the issue of what has graphically been described as link rot [38], whereby a worryingly large proportion of non-persistent identifiers used to cite data and associated information are found not to link correctly after just a few years or, in some cases, months. Digital repositories are intrinsically designed to enable replication of content to other locations whilst preserving essential information such as persistent identifiers. Here we focus on the DSpace repository, which provides an OAI-ORE endpoint implementing the Open Archives Initiative's Object Reuse and Exchange standards [13] to achieve such replication. The ORE manifest for the deposition illustrated in Figs. 1 and 2, for example, is declared in the metadata in the same way as the METS manifest (see above). These locators derive from the handle assigned to this entry, 10042/159060. For each entry, a structured XML representation of the data (for example PM7.xml), with a declared standard XML schema (CML 2.4), is included. This allows the data to be directly parsed using a generic XML import/export tool, so enhancing any future wholesale export of the dataset. The use of XML is to be preferred to older legacy chemical formats, for which no explicit schemas are, or indeed can be, declared.
The following illustrates a programmatic method for a curation procedure that could be employed if starting from SWORD [21,22], OAI-ORE [13] and/or METS-enabled endpoints.
• Obtain a list of all the individual entries for the collection. This is accomplished by using DataCite to search for a unique identifier associated with the collection, defined in this example by the query string alternateIdentifier:NCI. The returned metadata reveals that the ORE and METS manifests are associated with the relatedIdentifier metadata element, and the direct path to each is obtained from the value of its HasMetadata child.
• These provide programmatic access (using XSLT transforms or other methods) to the METS bitstream itself, which contains all the files in the deposition as a compressed archive; the URL of the METS bitstream recorded in this metadata can then be retrieved directly (a sketch of such a harvesting loop is given after this list).
• Finally in this section, we note PREMIS, another international standard for metadata supporting the preservation of digital objects to help ensure their long-term usability [39]. Currently, the PREMIS schema is used in DSpace instances only to represent technical metadata about DSpace bitstreams (i.e. files), generated by a PREMIS crosswalk.
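A minimal sketch of the harvesting loop implied by the bullets above: fetch the DataCite metadata record for a DOI, pull out the relatedIdentifiers flagged HasMetadata, and download the METS or ORE documents they point to. The content-negotiation media type is a documented DataCite mechanism, but the schema version, namespace and example DOI handling here are assumptions, not code from the paper.

```python
import xml.etree.ElementTree as ET

import requests

NS = "http://datacite.org/schema/kernel-3"   # assumed DataCite schema version

def metadata_links(doi: str) -> list:
    """Return the URLs of relatedIdentifiers carrying the HasMetadata relation."""
    # Content negotiation for the DataCite XML record of this DOI.
    xml_text = requests.get(
        f"https://doi.org/{doi}",
        headers={"Accept": "application/vnd.datacite.datacite+xml"},
    ).text
    root = ET.fromstring(xml_text)
    return [el.text for el in root.iter(f"{{{NS}}}relatedIdentifier")
            if el.get("relationType") == "HasMetadata"]

def harvest(doi: str) -> list:
    """Download every METS/ORE manifest advertised for one entry."""
    return [requests.get(url).content for url in metadata_links(doi)]

manifests = harvest("10.14469/ch/153690")   # example DOI from the text
print(f"retrieved {len(manifests)} metadata documents")
```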
Comparison with other repositories
Here we compare our approach for data deposition with that of two alternative existing data repositories, one of which is also based on DSpace (Dryad [40]) and a second, Figshare [41], which is not. The first is run as a not-for-profit organisation that offers data deposition services, with persistent identifiers provided both by the DSpace handle manager and via DataCite. Dryad deploys a subset of the metadata configured for our SPECTRa server, but significantly this does include [42] an OAI-PMH-based programmatic method for access to the data object via the METS manifest, allowing a procedure similar to the OAI-ORE resource map outlined above to be used to access the datafile. Dryad differs in one significant regard from our approach in terms of the granularity of the deposition. Since our data are based on the computed properties of discrete molecules, we have adopted the strategy of one data record per molecule, and hence the dataset for each molecule is also assigned its own persistent identifiers. In contrast, the primary model used by Dryad offers the coarser granularity of one data record per associated publication, whereby the complete Dryad data set is linked with a peer-reviewed journal publication. The net result is a pair of persistent identifiers, one for the article and one for the data, with the data component embargoed until the article itself is released to the public after peer review. We do not regard this approach as an optimal one when dealing with molecular data, since it does not permit any discovery process for individual molecules contained in such a collection.
DOIs can also be minted using the current (2015) version of Figshare using the DataCite API. This commercial repository is not currently OAI-PMH/OAI-ORE compliant and so no standard ORE or METS resource maps are declared to DataCite using, for example, the relatedIdentifier element of the DataCite metadata schema. This lack of compliance would render a lossless curation of our SPECTRa collection to, for example, Figshare more difficult to achieve programmatically, but such an operation is not excluded in the future should the functionality become available.
Comparison with two other collections of molecular quantum mechanical calculation data
We first return to the article reporting the results of a stochastic exploration of the structures predicted using quantum mechanical procedures [11]. Initial structures obtained with approximate methods are refined using much higher levels of theory. The molecular coordinates for unexpected, unusual or interesting outcomes from this procedure were then deposited in the supporting information (SI) associated with the published article. This contains just 10 species, although clearly far more molecules were computed at various levels of theory, and these now appear lost to science. The SI itself takes the form of a paginated PDF file downloadable from the article landing page, which contains no exposed associated metadata for any individual entry. It is discussed here because it is very typical of the data associated with studies of this type. Curation of such data is really only worthwhile if it is first aggregated into a larger collection, a process that is never attempted because of this formal lack of metadata. The resulting fragmentation, and hence loss of valuable data, is, we argue, one of the broken aspects of current publishing models that requires urgent attention.
As with the previous example, the next article [12] describes quantum-mechanics-based procedures used to obtain the molecular structures of a much larger collection of 134 kilo-molecules, and the subsequent methods involved in creating a digital-repository-based collection of these. Depositing all the calculations recovered from this process goes one important stage beyond the previous example, and is therefore to be welcomed. However, an important unanswered question is how easy it would be to curate this collection a decade from now. In fact, several fundamental design features [12] have made such an operation unnecessarily difficult.
• The entire dataset is associated with a single persistent identifier [43] and takes the form of a compressed archive that a user can download and expand into a folder containing 133,886 individual files. The collected metadata, however, does not refer to these files but to the folder containing them, which in turn means that the contents of this folder are in effect not discoverable using the mechanisms described above.
• In general, it is quite difficult on most computer systems to navigate a single folder containing 133,886 items. One would have to resort to specialised software to do this, and this would probably restrict inspection to individual files rather than a sub-collection with specified properties.
• The individual entries adopt the original XMol XYZ syntax. That syntax has then been annotated with a number of other properties, both to the individual atoms and to the molecule as a whole, the latter including both SMILES and InChI strings. Unfortunately, this annotation is in effect ad hoc in a manner that was not envisaged for the original XYZ format. A human has to read the associated documentation to establish the precise meaning of the annotations, and then write suitable code to extract the annotations to render them useable for e.g. metadata. It is in general uncertain whether software that has been written to process standard XYZ files lacking annotations could successfully cope with this additional content. At best, one might expect the annotations to be simply discarded, since their semantics are not accessible to such a program, only to a human. At worst, it could render the document entirely unreadable by standard software. • The individual files themselves contain no information about the procedure used to compute the coordinates. In this regard, it would be quite difficult to use these files to reproduce the original calculation; thus the original program inputs are not available, nor indeed are the original program outputs from the quantum mechanical calculation. Curating such a collection therefore would require bespoke interpretation by a human, which always tends to be an expensive and error-prone solution.
The Harvard Clean Energy project [44] is another recent deposition based on quantum chemical calculations, with a claimed 2.3 million molecules associated with an even more impressive 150,000,000 DFT calculations. Access to any individual calculation on any specific molecule however is available only via a search front-end to the database based on specified search parameters. No metadata is exposed on any molecule or its calculation parameters in any standard form and it is difficult to envisage any type of curation that could be successfully applied to such a collection. We think it unlikely that enabling open curation was a design feature of the system, although we also believe that this should be included in future designs of such collections.
The recently announced CERN OpenData Portal [45] is also included here, since the data described is very different from the chemical information described above, both in terms of the cost of its acquisition, and of its size and granularity. The organisational prefix for the collection is 10.7483 and this reveals (in December 2014) 53 entries. A typical entry [46] itself contains 3211 datafiles totalling 3.4 TB in size. Analysing this data requires very specialised software, which is itself assigned a persistent identifier [47]. The software is distributed as a virtual image and is designed to be used in the form of a virtual machine containing all the tools required to acquire and analyse the data. The equivalent in our own implementation is the virtual JSmol container for the chemical data [48], that is made available indirectly in the web browser document object model (DOM) as an HTML5 canvas, rather than as a virtual instance on a computer. Working outside the virtual containers provided by the CERN data portal is unlikely to be useful, whereas for chemistry the JSmol container could be replaced by other containers such as e.g. Avogadro 2 [49].
Conclusions
This brief survey of two recently published molecular data collections indicates that each subject domain will benefit from specifically optimising the features of repository collections for its own needs. We believe that in the chemistry domain, it is useful to adopt a molecular granularity and to develop metadata, search and acquisition mechanisms appropriate for this granularity, even at a scale of 2.3 million molecules. We think it less useful to aggregate the molecules into single containers for which metadata about individual molecules is not exposed. It is also essential that the procedures adopted are programmatic, in that all the required information to re-curate the dataset is available for machine processing. If this is so, then there is no reason why the process could not scale well beyond 2.3 million molecules if required.
Code availability
The MOPAC software, including the latest PM7 parameter set [10], can be obtained and licensed from http://openmopac.net. The DSpace software itself is open source [1]. The SPECTRa-DSpace DIM2DataCite crosswalk is archived [50]. The JavaScript routines implementing [36] the functionality described in the results section [34] are available via the repository entries cited in ref. 36. | 9,375.8 | 2015-08-27T00:00:00.000 | ["Chemistry", "Computer Science"] |
TDP-43 regulates GAD1 mRNA splicing and GABA signaling in Drosophila CNS
Alterations in the function of the RNA-binding protein TDP-43 are largely associated with the pathogenesis of amyotrophic lateral sclerosis (ALS), a devastating disease of the human motor system that leads to motoneuron degeneration and reduced life expectancy through molecular mechanisms that are not well understood. In our previous work, we found that the expression levels of the glutamic acid decarboxylase enzyme (GAD1), responsible for converting glutamate to γ-aminobutyric acid (GABA), were downregulated in TBPH-null flies and in motoneurons derived from ALS patients carrying mutations in TDP-43, suggesting that defects in the regulation of GAD1 may lead to neurodegeneration by affecting neurotransmitter balance. In this study, we observed that TBPH was required for the regulation of GAD1 pre-mRNA splicing and of GABA levels in the Drosophila central nervous system (CNS). Interestingly, we discovered that pharmacological treatments aimed at potentiating GABA neurotransmission were able to revert locomotion deficits in TBPH-minus flies, revealing novel mechanisms and therapeutic strategies in ALS.
A common characteristic shared by several neurodegenerative diseases is the dysfunction of the RNA-binding protein TDP-43, a member of the heterogeneous nuclear ribonucleoproteins (hnRNPs) family 1,2. TDP-43 is a protein involved in several aspects of RNA metabolism, including pre-mRNA splicing, mRNA transport and microRNA maturation [3][4][5][6]. A breakthrough link with neurodegenerative diseases came in 2006, when TDP-43 was identified as the main component of the ubiquitinated cytoplasmic inclusions in ALS and frontotemporal lobar degeneration (FTLD) [7][8][9]. TDP-43 pathology is currently described in a large proportion of cases of Alzheimer's disease as well as Parkinson's and Huntington's disease [10][11][12][13][14]. Careful studies aimed at understanding the normal function of TDP-43 and its participation in the mechanisms of neurodegeneration have, therefore, become critical to establish the metabolic pathways implicated in TDP-43-mediated neuronal toxicity. In this direction, we have previously shown that TDP-43 is required to regulate the synaptic levels of GAD67, the enzyme responsible for converting glutamate to γ-aminobutyric acid (GABA), suggesting that modifications in the glutamate/GABA neurotransmitter balance may affect neuronal survival in TDP-43-perturbed brains 15. Evidence for the involvement of glutamate in the pathogenesis of ALS has already been provided by several studies [16][17][18]. However, treatments aimed at preventing glutamate-mediated excitotoxicity have failed to correct the clinical symptoms of the disease, revealing that different mechanisms might be at work besides the excessive availability of glutamate. In support of this hypothesis, it has been reported that GABA levels are reduced in the motor cortex of patients with ALS, and it has been demonstrated that TDP-43 overexpression increases GAD67 protein levels and GABA release in the mouse forebrain [19][20][21][22], implying that the synaptic transmission mediated by GABA might be affected and/or may have a role in TDP-43-defective brains. In this study, we investigated the mechanisms by which TDP-43 regulates the cytoplasmic levels of GAD1 and determined the role of GABA neurotransmission in TDP-43 pathology using Drosophila.
Results
Drosophila TDP-43 regulates the expression levels of GAD1 by facilitating its pre-mRNA splicing. We have previously demonstrated that the protein levels of GAD1 appear downregulated in TBPH-null flies; however, the mechanisms behind these modifications were not identified 15. In order to address this question, and considering that TBPH is involved in the regulation of different aspects of RNA metabolism, we decided to investigate whether GAD1 RNA splicing was affected in TBPH-minus alleles (two TBPH-null allele strains, named Δ23 and Δ142, in which TBPH protein production is completely abolished; see ref. 23). For this purpose, we designed a set of primers specific to discriminate GAD1 pre-mRNA and another specific for the mature mRNA (the intronic set and the exonic set, respectively; see Fig. 1a,b), and quantified their expression levels by qRT-PCR. Interestingly, we found that the pre-mRNA of GAD1 was upregulated in the brains of the TBPH-mutant alleles compared to controls (Fig. 1b, left graph). On the contrary, the mature mRNA transcript of GAD1 was downregulated in the mutant flies compared to controls (Fig. 1b, right graph), indicating that TBPH is required to regulate the processing of GAD1 mRNA, most probably through the splicing of the mature transcript in Drosophila brains. In agreement with this possibility, we observed the presence of putative binding sites for TBPH in the long intron present in the 5'UTR region of GAD1, between the non-coding exon 1 and the coding exon 2 (Fig. 1c, upper scheme). In order to test whether the absence of TBPH influenced the processing of these segments, we constructed a minigene containing the genomic sequences described above (Fig. 1c, upper left scheme). The GAD1 pre-mRNA 5'-UTR minigene also contained an EGFP cloned in frame with the second exon of GAD1, which carried the original ATG start codon. The construct was placed under the control of the GAL4-UAS system and used to transfect Drosophila S2 cells together with an Actin-promoter driver (actin-GAL4). S2 cells co-transfected with an RNAi against TBPH showed a strong reduction in the expression of the EGFP protein compared to control cells treated with an RNAi against luciferase in western blot assays (Fig. 1c, right panel, quantified in the left graph). These results show that TBPH function is required for the proper splicing and expression of the GAD1 pre-mRNA 5'-UTR minigene and strongly suggest that similar alterations may explain the defects detected in the processing of the immature GAD1 mRNA in TBPH-null flies.
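The paper reports the qRT-PCR results as relative expression changes without giving the calculation; a standard way to obtain such fold changes from raw Ct values is the 2^-ddCt method, sketched below with hypothetical Ct numbers and an unspecified housekeeping reference gene (assumptions, not the authors' data or pipeline).

```python
def fold_change(ct_target_mut, ct_ref_mut, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression of a target transcript by the 2^-ddCt method.

    dCt normalises the target to a reference (housekeeping) gene within each
    genotype; ddCt compares mutant to control; 2**-ddCt is the fold change of
    the target in the mutant relative to the control.
    """
    d_ct_mut = ct_target_mut - ct_ref_mut
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
    return 2 ** -(d_ct_mut - d_ct_ctrl)

# Hypothetical Ct values for the intronic (pre-mRNA) and exonic (mRNA) primer sets.
print(fold_change(24.1, 18.0, 25.6, 18.1))   # pre-mRNA: > 1, i.e. accumulates in the mutant
print(fold_change(22.9, 18.0, 21.7, 18.1))   # mature mRNA: < 1, i.e. reduced in the mutant
```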
Reduced levels of the GABA neurotransmitter in TDP-43-null brains affect locomotor behaviors. The alterations in the processing of GAD1 immature mRNA described above suggest that GABA neurotransmission might be affected in TBPH-null flies. In fact, the synthesis of GABA occurs through the conversion of glutamate by the GAD1 enzyme; GABA is then loaded into vesicles by the vesicular GABA transporter (vGAT) 24. GABA receptors are inhibitory receptors that respond to released GABA and are divided into two types: ionotropic and metabotropic. To date, three ionotropic subunits and three metabotropic subunits have been described in Drosophila (see Table 1). In order to address this hypothesis, we decided to test whether modifications in GABA signaling were able to influence the locomotor phenotypes described in TBPH-deficient flies. For these experiments, we took advantage of the GAL4-UAS expression system to simultaneously co-silence the endogenous TBPH protein together with the various GABA ionotropic and metabotropic receptors. Specific RNAi against the GABA-A type receptor (RDL), the ligand-gated chloride channel (LCCH3), and the GABA-B type receptors 1, 2 and 3 (R1, R2 and R3) were expressed, either alone or together with an RNAi against TBPH, under the neuronal driver elav-GAL4. As a result, we found that the suppression of the GABA receptors strongly enhanced the motility problems present in TBPH-defective flies compared to control insects expressing the RNAi against the GABA receptors alone (Fig. 2 and Supplementary Fig. 1).
In the same direction, we utilized a specific antibody against GABA to quantify the intracellular levels of the neurotransmitter in third-instar larval brains. After staining, we found that GABA intensity was significantly reduced in TDP-43-null brains compared to wild-type controls (Fig. 3a,b,e). Interestingly, we found that neuronal transgenic expression of GAD1, or of the endogenous protein TBPH, in TDP-43-minus backgrounds was able to recover the expression levels of GABA in the Drosophila brain (Fig. 3c,d,e), indicating that these results are specific and that the regulation of GAD1 levels is critical to prevent GABA neurotransmission defects and neurodegeneration in TDP-43-defective brains. To determine whether the reduced levels of GABA detected in TBPH-minus brains were related to the neurodegenerative phenotypes described in these flies, we decided to treat TBPH-null flies with different agonists of GABA neurotransmission. For these assays, a potent GABA uptake inhibitor, nipecotic acid (200 µM), was added to the fly food during larval development, and we found that the administration of this compound was able to consistently improve the peristaltic movements of TBPH-minus third-instar larvae (L3) compared to untreated mutants or wild-type controls (Fig. 4a).
The positive effect of this pharmacological treatment became more obvious when a TBPH hypomorphic background was utilized. As a matter of fact, we found that 200 µM of nipecotic acid dispensed to flies expressing TBPH-RNAi in neurons (UAS-Dcr-2/+; tbph Δ23, elav-GAL4/+; TBPH-RNAi/+) was sufficient to recover motility (Fig. 4b), indicating that GABA neurotransmission plays an important role in TBPH-mediated neurodegeneration. We employed a similar approach with a different drug, muscimol (100 µM), a known GABA-A receptor agonist; this showed a significant rescue of L3 larval motility in the TBPH hypomorphic background compared to controls (Fig. 4c).
Discussion
Defects in the neuronal handling of the excessive formation of glutamate have long been implicated in the mechanisms of neurotoxicity behind ALS. However, growing evidence suggests that the glutamate-induced excitotoxicity hypothesis alone is not sufficient to explain the hyperexcitability of the motoneurons observed in ALS. In effect, one of the few approved compounds to treat ALS, riluzole, besides its anti-glutamate action, directly interacts with GABA receptors, signifying that the activation of inhibitory neurotransmission may also play a role in the disease. In this direction, reduced GABA levels in the motor cortex of ALS patients have previously been described 19,20. Further evidence came from the SOD1 (G93A) mouse model 25, in which glutamatergic neuronal hyperexcitability was associated with decreased GABAergic intracortical inhibition 22,26. In a different ALS mouse model, the wobbler mouse 27, similar evidence was described, including a reduction in vesicular GABA transport followed by decreased GABAergic inhibition 28. Consistently, alterations in GABA neurotransmission have been described in patients suffering from Alzheimer's disease 29, Parkinson's disease 30 and related neurodegenerative processes in addition to ALS 20,31. Nevertheless, despite the evidence indicated above, the role of GABA and other inhibitory neurotransmitters in TDP-43 pathology has not been completely clarified. In that respect, our data revealed that TBPH, the TDP-43 conserved protein in Drosophila, plays an important role in maintaining the equilibrium between neuroexcitatory and inhibitory signals through the regulation of GAD1, the enzyme responsible for the synthesis of GABA after the cleavage and processing of glutamate. On that point, we found that TBPH has an important role in the modulation of GAD1 mRNA splicing. Thus, we observed that a reduction of TBPH promoted the accumulation of GAD1 pre-mRNA transcripts, with the subsequent reduction in the processing and generation of mature GAD1 mRNA and protein. Regarding the molecular mechanisms, notwithstanding the presence of putative binding sites for TBPH in the 5'UTR region of GAD1 mRNA, we did not find direct binding between TBPH and GAD1 mRNA 15. Subsequently, we observed that the overexpression of TBPH was not sufficient to modify the pre-mRNA processing of GAD1 either (Supplementary Fig. 2), suggesting that TBPH may act in consonance with other RNA-binding proteins in the formation of GAD1 mRNA splicing complexes 32,33. Regarding the physiological consequences of GAD1 reduction in TBPH/TDP-43-minus flies, we have described that the genetic correction of GAD1 levels was sufficient to restore locomotive behaviors and neurotransmitter balance in vivo 15, supporting the idea that defects in inhibitory signals additionally contribute to motoneuron degeneration. Moreover, our results provide molecular explanations for how defects in TBPH/TDP-43 may lead to alterations in neurotransmitter balance in ALS patients. In relation to that, we exhaustively analyzed the role of each of the GABA receptors encoded in the Drosophila genome, in terms of their capacity to influence the neurological phenotypes described in TBPH/TDP-43-depleted flies. We observed that the silencing of GABAergic receptors provoked a predominant worsening of the locomotor phenotypes in TBPH-RNAi-treated larvae (Fig. 2), with no major differences distinguishable among them 34.
On the other hand, clinical trials using GABA agonists to treat ALS patients have not yet yielded significant results, suggesting that the modulation of GABA neurotransmission alone may not be sufficient to overcome the complete symptoms of the disease 35. More sophisticated therapies, perhaps combining GABAergic with anti-glutamatergic approaches, might be required to re-establish the neurotransmitter balance in TDP-43-affected brains. In support of this hypothesis, gene-based therapies using GAD1 to modulate neuronal signaling are being used to treat patients with Parkinson's disease, and similar approaches could be evaluated in patients with ALS 36,37. Altogether, our results reveal a novel molecular mechanism behind TDP-43-derived pathologies and put forward novel therapeutic strategies: potentiating inhibitory signaling or balancing the levels of neurotransmitters may succeed in controlling motoneuron hyperexcitability in ALS-affected individuals.
Materials and methods
Fly strains. The fly genotypes used for the experiments are indicated hereafter: w1118 - OregonR - w;tbph Δ23/CyO GFP (null TBPH allele, ref 23) - w;tbph Δ142/CyO GFP (null TBPH allele, ref 23).
Climbing assay. Freshly eclosed flies were transferred to new food vials and allowed to adapt for 3 to 4 days. A female-to-male ratio of 1:1 was maintained. After this period, they were moved, without anesthesia, to 15 ml glass cylinders, tapped to the bottom and allowed to climb, taking advantage of their natural drive for negative geotaxis. The number of flies that reached the top of the tube in 15 s was counted and converted to a percentage.
Larval brain, microscope acquisition and quantification. Larvae were selected and dissected as previously described in ref 38. Briefly, for whole larval brain staining, previously selected larvae were dissected on Sylgard plates in Phosphate Buffer (PB); brains were removed and fixed for 20 min in 4% formaldehyde in Phosphate Buffer with 0.3% Triton X-100 (PBT), blocked in 5% Normal Goat Serum (NGS), labelled with primary and secondary antibodies and mounted on microscope slides in SlowFade (S36936, Thermo Fisher Scientific). Brain images were acquired at 20× at a 0.6-fold magnification on a Zeiss LSM 880 confocal microscope and then analyzed using ImageJ. Z-stacking was performed and the ratio of GABA/Elav maximum intensity was calculated.
Cell culture and RNA interference. S2 cells were cultured in Insect-Xpress medium (Lonza) supplemented with 10% fetal bovine serum and 1X antibiotic-antimycotic solution (#A5955, Sigma). RNA interference was achieved using HiPerfect Transfection Reagent (#301705, Qiagen) and an siRNA specific for Drosophila TBPH (5'-GGA AGA CCA CAG AGG AGA GC-3'); an siRNA against luciferase was used as control (5'-UAA GGC UAU GAA GAG AUA C-3'; Sigma). Immediately before transfection, 2 × 10^6 cells were seeded in 6-well plates in 1.4 ml of medium containing 10% fetal serum. 2 µg of each siRNA were added to 91 µl of Opti-MEM I reduced serum medium (#51985-026, Thermo Fisher Scientific), incubated for 5 min at room temperature, and subsequently 6 µl of HiPerfect Transfection Reagent were added. The silencing procedure was repeated after 24 h. Plasmids containing the Gad1 minigene and GAL4 were co-transfected (0.3 µg each) at 24 h together with the second siRNA dose. Cells were analyzed 72 h after the initial treatment.
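The GABA/Elav maximum-intensity ratio described above was computed in ImageJ; a numerically equivalent operation on an exported two-channel z-stack could look like the sketch below (NumPy-based, with a hypothetical array layout, as an illustration rather than the analysis actually used).

```python
import numpy as np

def gaba_elav_ratio(stack: np.ndarray) -> float:
    """Ratio of GABA to Elav signal from a confocal z-stack.

    `stack` is assumed to have shape (z, channel, y, x), with channel 0 = GABA
    and channel 1 = Elav. Each channel is collapsed to a maximum-intensity
    projection along z and then summarised by its maximum value, mirroring the
    'Max Intensity' measurement described in the methods.
    """
    gaba_mip = stack[:, 0].max(axis=0)
    elav_mip = stack[:, 1].max(axis=0)
    return float(gaba_mip.max() / elav_mip.max())

# Hypothetical synthetic stack: 10 z-slices, 2 channels, 512 x 512 pixels.
rng = np.random.default_rng(0)
example = rng.integers(1, 4096, size=(10, 2, 512, 512))
print(gaba_elav_ratio(example))
```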
GAD1 minigene construction.
A minigene containing the 5'UTR region of Gad1, spanning from exon 1 to exon 2 (reference isoform FBtr0073276), in frame with a reporter EGFP, was cloned into the pUAST-attB plasmid. The minigene was co-transfected with p-actin-GAL4 into S2 cells. EGFP expression was analyzed in S2 cells treated with siRNAs against TDP-43 or luciferase.
RT-PCR.
Adult heads were mechanically squeezed to proceed with RNA extraction, using the RNeasy Microarray Tissue kit (#73304, Qiagen), and treated with the Turbo DNA-free kit (#AM1907, Ambion). Reverse transcription was performed using oligo-dT primers and SuperScript III First-Strand Synthesis (#18080093, Invitrogen). Gene-specific primers were designed for amplification. | 3,588.2 | 2021-04-16T00:00:00.000 | ["Biology"] |
Crystal landscape in the orcinol:4,4′-bipyridine system: synthon modularity, polymorphism and transferability of multipole charge density parameters
The role of the supramolecular synthon as the operational structural unit in the late stages of the crystallization event is highlighted with reference to polymorphs and pseudopolymorphs in the orcinol–bipyridine cocrystal system.
Introduction
Crystal engineering is concerned with the development of logical design strategies based on the concept of the supramolecular synthon (Desiraju, 1995) and the execution of such strategies to obtain entire families of related crystal forms of a series of chemically similar molecules. The purpose of obtaining these engineered structures is to achieve physical and chemical properties of interest and utility (Desiraju, 1989;Desiraju et al., 2011). At a more fundamental level, crystal engineering may be reduced to elucidating the mechanism of crystallization (Weissbuch et al., 2003;Erdemir et al., 2009). Given any molecular structure, what is the crystal structure that would be obtained? If this question could be answered fully, the essential problem of crystal engineering would be solved because any pre-desired crystal structure could then be obtained at will. However, it is not likely that such an answer will be available anytime soon. The issues involved in the aggregation of molecules into clusters, larger ensembles and finally the events that lead up to nucleation and beyond are still way too complex to be addressed experimentally or computationally, in any general sense. Crystallography provides images of the 'final' outcomes of the crystallization event, but the constraints of long-range periodicity that are implicit for any species that gives a three-dimensional diffraction pattern hardly reveal the multiplicity and variety of chemical events that have taken place before the crystal is obtained. Perhaps there is still some justification in Ruzicka's dismissal of solids as chemical cemeteries (Dunitz et al., 1988).
Still, and even within the limits imposed by diffraction-based crystallography, one might explore a small portion of the structural panorama that just precedes the 'final' crystal, because there are several higher-energy crystal forms that may be isolated and characterized with crystallography and that provide a hint about the mechanism of crystallization, at least in the later stages (Davey et al., 2006; Kulkarni et al., 2012; Hunter et al., 2012; Davey et al., 2013). These forms could include polymorphs with higher values of Z′, various solvates, kinetically labile species and other metastable and higher-energy forms of the compound in question (Braun et al., 2012). Taken collectively, one might envisage these forms as constituting a kind of landscape that profiles the structural and energetic changes that take place during the late stages of crystallization of an organic compound (Gavezzotti, 2003; Blagden & Davey, 2003; Price, 2008). Some of us have shown recently, using the example of fluoro-substitution in benzoic acids, that subtle chemical variation of a molecular scaffold permits the exploration of structural space that would otherwise be experimentally inaccessible (Dubey et al., 2012).
The formation of two-component molecular crystals, or cocrystals (Desiraju, 2003; Dunitz, 2003), is a well-researched aspect of modern crystal engineering (Herbstein, 2005; Bond, 2007; Stahly, 2009; Wouters et al., 2011), although the phenomenon itself has been known since the isolation of quinhydrone more than 150 years ago (Wöhler, 1844). An interesting aspect of recent research on cocrystals, and indeed this was hinted at more than a decade ago when cocrystals came into the foreground, is that they may be less prone to form polymorphs than single-component crystals (Vishweshwar et al., 2005). This type of thinking possibly arose from the idea that cocrystal formation is only possible if very specific interactions between the two components are optimized and, as such, these substances are less likely to form multiple crystal forms. Of course, such a contention can hardly be proved or disproved because it is, in Zaworotko's words, like 'proving the negative' (Almarsson & Zaworotko, 2004). However, there has always been an interest in this matter. Recently, one of us co-authored a report on two polymorphs of the 2:3 cocrystal of orcinol (5-methylresorcinol) and 4,4′-bipyridine, I and II (Tothadi et al., 2011). Subsequently, the present group of authors were able to isolate two more polymorphs, III and IV, and one 1:1 cocrystal, V, which might be termed a pseudopolymorph. Noting that it was quite unusual to obtain five crystal forms in a cocrystal system, a systematic investigation of these forms was initiated, in the context of the structural landscape. In the course of this study, it was noted that the five forms are related through some basic supramolecular synthons, and this confers a certain element of modularity (Desiraju, 2010; MacGillivray et al., 2000) on these crystal structures. We have previously shown that the modularity of the supramolecular synthon is responsible for the successful transferability of charge-density-derived multipole parameters for structural fragments, thus creating a possibility for the derivation of charge density maps for new compounds, in effect opening up an opportunity for the large-scale application of charge density maps as a general structural tool in crystal engineering. We termed this methodology the Supramolecular Synthon Based Fragments Approach (SBFA). We also showed that the SBFA method is applicable not only to single-component crystal structures but also to two-component crystals, or cocrystals (Hathwar, Thakur, Dubey et al., 2011). The SBFA approach is applied here to the crystal forms in the present study, in other words to polymorphs of cocrystals. The purpose of the transferability was to quantify the various intermolecular interactions present in the different polymorphic forms of the crystal landscape of the multi-component system. In effect, the utility of transferability of multipole parameters among the robust synthons in the various polymorphic modifications in cocrystals is demonstrated. The link between the charge density distribution associated with transferable synthons and the possible aggregation pathways indicated in the landscape offers a unique possibility to quantify intermolecular interaction energies associated with kinetically stable polymorphic forms.
Materials
Orcinol was purchased from Sigma Aldrich and 4,4′-bipyridine from Alfa Aesar and used without further purification. For the crystallization of all compounds, several stoichiometric ratios such as 1:1, 1:2, 2:1 and 2:3 were tried along with various crystallization methods such as solvent evaporation, sublimation and use of anti-solvent. After a week, good quality single crystals, which were suitable for the single-crystal diffraction experiments, were obtained. The ratio of the two compounds obtained in the crystal is not necessarily the ratio in which they are taken for the crystallization. Table 1 gives salient details of the cocrystals investigated in this study. Despite several attempts, it was not possible to obtain cocrystal I again. It may be noted that form III is obtained via sublimation, a technique that is not generally customary for multi-component crystals.
Data collection and structure refinement details
Routine data sets for compounds II, IV and V were collected at 100 K on an Oxford Xcalibur diffractometer with a microfocus X-ray source (Mo Kα), equipped with a Cryojet-HT nitrogen gas-stream cooling device. The variable-temperature data sets for III were collected at 293, 200, 160, 140 and 120 K. In all these cases, data were processed with CrysAlisPro (Oxford Diffraction, 2011). Structure solution and refinements were performed with SHELX2012 (Sheldrick, 2008) using the WinGX suite (Farrugia, 2012).
High-resolution charge density data collection and structure refinement details of the 4-hydroxybenzoic acid:isonicotinamide cocrystal
These data provide the required O–H···N synthon data entry into the in-house library, which can be used for the subsequent analysis of the polymorphs of orcinol:bipyridine. Data were collected on a single crystal of reasonable size and quality (as was examined under a polarizing microscope) which was affixed to a Hampton Research cryoloop using Paratone-N oil. The crystal was cooled to 100 K with a liquid nitrogen stream using an Oxford Cryosystems N2 open-flow cryostat. High-resolution X-ray data up to (sin θ/λ)max = 1.08 Å⁻¹ with high redundancy (~14) and completeness (~100%) were collected on a Bruker Kappa Apex II CCD diffractometer using Mo Kα radiation at 100 K. Data collection strategies were generated using the COSMO module of the Bruker software suite (Bruker, 2006). The crystal-to-detector distance was fixed at 40 mm and the scan width was 0.5° per frame during the data collection. Cell refinement, data integration and reduction were carried out using the SAINTPLUS program. Numerical absorption correction was done by crystal face indexing. Sorting, scaling and merging of the collected data sets were carried out using the SORTAV program (Blessing, 1997). The crystal structure was solved by direct methods and refined in the spherical-atom approximation using SHELXL2012 (Sheldrick, 2008) from the WinGX suite (Farrugia, 2012). The crystallographic information and multipole refinement details are provided in the supporting information.
Transferability of multipole parameters using the SBFA
Polymorphs of orcinol (5-methylresorcinol) and 4,4′-bipyridine studied in the present work were divided into chemically reasonable molecular fragments based on their supramolecular environments (supramolecular synthons; Fig. 1). The refined multipole parameters (Pval, Plm, κ and κ′) present in the in-house library of experimental charge density data sets were used for SBFA transfer to all of these target molecules. Scaling and initial refinement of the positional and displacement parameters of all atoms were carried out using the XD2006 package (Volkov et al., 2006). The H atoms were fixed to neutron values and their anisotropic displacement parameters were computed using the SHADE2 server (Madsen, 2006; Munshi et al., 2008). Charge neutralization was obtained by first fixing the individual atomic monopoles to neutral-atom values and then refining the atomic monopoles for all atoms, which allowed realistic atomic charge values to be obtained. All other multipole parameters, including κ and κ′, were kept fixed during the refinements.
Figure 1: Logical fragments based on supramolecular synthons (color shaded) in forms II through to V. Notice the brown synthon, which consists only of a weak C–H···N interaction. These fragments may be transferred from structure to structure to generate a 'synthetic' charge density map.
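No code accompanies the SBFA description; as a purely conceptual sketch (with hypothetical parameter values, fragment names and atom keys), the transfer step amounts to looking up library multipole parameters by synthon fragment and atom type, copying them onto the target atoms, and marking only the monopole populations as refinable while the higher multipoles and kappas stay fixed.

```python
from dataclasses import dataclass, field

@dataclass
class MultipoleParams:
    p_val: float                                # valence monopole population
    p_lm: dict = field(default_factory=dict)    # higher multipole populations
    kappa: float = 1.0                          # radial expansion/contraction
    kappa_prime: float = 1.0

# In-house library keyed by (synthon fragment, atom type); the values are hypothetical.
LIBRARY = {
    ("O-H...N", "O_hydroxyl"): MultipoleParams(6.20, {"D1+": 0.05}, 0.98, 1.10),
    ("O-H...N", "N_pyridine"): MultipoleParams(5.15, {"D1-": -0.04}, 0.99, 1.05),
}

def transfer(target_atoms, library=LIBRARY):
    """Attach library parameters to target atoms; only monopoles remain refinable."""
    model = {}
    for atom_label, fragment, atom_type in target_atoms:
        params = library[(fragment, atom_type)]
        model[atom_label] = {"params": params, "refine": ["p_val"]}  # Plm and kappas fixed
    return model

model = transfer([("O1", "O-H...N", "O_hydroxyl"), ("N1", "O-H...N", "N_pyridine")])
print(model["O1"]["params"].p_val, model["N1"]["refine"])
```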
Theoretical evaluation of charge density to authenticate the multipole parameters derived from SBFA
Single-point periodic quantum mechanical calculations at the B3LYP/6-31G(d,p) level were carried out using CRYSTAL09 (Dovesi et al., 2009) with the neutron-normalized geometries obtained from experimental structure refinement. The shrinking factors (IS1, IS2 and IS3) along with the reciprocal lattice vectors were set to 4 (30 k-points in the irreducible Brillouin zone). The bi-electronic Coulomb and exchange series values for the truncation parameter were set as ITOL1-ITOL4 = 8 and ITOL5 = 17, respectively, for the CRYSTAL09 calculations. The level shifter was set to 0.7 Hartree per cycle. The self-consistent field convergence limit was chosen as ~10⁻⁷ Hartree. The cohesive energy calculation was performed in all cases and the Grimme dispersion corrections along with the basis set superposition error corrections were included in the calculations. For a definition of the cohesive energy and details of its calculation refer to the supporting information. Theoretical structure factors obtained from the CRYSTAL09 single-point calculations for II and III were used in the multipole refinements using the XD software package (Volkov et al., 2006). Molecular geometry and the atomic displacement parameters for all atoms were kept fixed throughout the multipole refinement of the static model. Refinements and analysis of the theoretically obtained charge density data were performed with an unrestricted multipole model to compare the results from the transferred SBFA model. The purpose of the theoretical modeling in the above two cases was to benchmark the quality of SBFA modeled densities.
Analysis of crystal forms
All five solid forms of orcinol-bipyridine are characterized by O–H···N hydrogen bonds between the two components (Fig. 2). These hydrogen bonds form a finite divergent pattern (synthon A) that consists of two orcinol molecules and three bipyridine molecules, seen in form I, or a closed convergent pattern (synthon B) that consists of two orcinol and two bipyridine molecules, seen in the related forms II through to V. The latter pattern was first identified by MacGillivray in his extensive studies of solid-state topochemical reactions of phenol-pyridine cocrystals (Gao et al., 2004; MacGillivray et al., 2008; MacGillivray, 2008). Because synthon B is a zero-dimensional entity, it is possible that it persists in solution. Still, both divergent and convergent possibilities seem to be efficient molecular arrangements that use four O–H···N hydrogen bonds each. These assemblies are further supported by weak intermolecular interactions such as C–H···N, C–H···O, C–H···π and π···π interactions.
The structure of form II is closely related to that of the new monoclinic form III (P2₁/n), which is also a 2:3 cocrystal. The tetramer synthons B (O–H···N: 1.75 Å, 175.2°; 1.76 Å, 175.3°) sandwich the free bipyridine in nearly the same manner. This larger assembly, consisting of two tetramers and the sandwiched bipyridine, is termed a Long Range Synthon Aufbau Module (LSAM) (Ganguly & Desiraju, 2008, 2010). The LSAM is a late synthon and Fig. 3(a) shows that the LSAMs in forms II and III are exceedingly similar. The advantage in differentiating between small and large synthons lies in the fact that the small synthons do not serve to distinguish well between polymorphs; the larger synthons include degrees of structural detail that permit such an exercise. In other words, the dissimilarity between the forms pertains not to the hydrogen bonding itself but rather to the arrangement of the LSAMs with respect to the crystallographic axes (Fig. 3b). All this clearly indicates that at a molecular recognition level (small supramolecular synthons) both forms are nearly the same, but as the molecular assembly becomes increasingly larger, the forms become different (Fig. 3b). Incidentally, orcinol molecules are nearly parallel to the crystallographic b-axis in form III. Taken with the mutually perpendicular arrangement of orcinol and bipyridine molecules, this leads to a pseudo-P1̄ character for the P2₁/n structure (Fig. 3c) (Sarma & Desiraju, 1986).
We collected single-crystal data for form III at five different temperatures and this showed evidence of a reversible phase transition between 140 and 160 K (Fig. 4). The structural details are given in the supporting information. After cooling through the phase transition, form III provides a new low-temperature crystal structure, form IV, which is modulated along the unique axis so that its length is nearly three times that in form III (Fig. 5). It may be noted that the molecular and packing changes in the III → IV transition are subtle. The conformations of the orcinol and bipyridine molecules are largely the same, as are the corresponding molecular orientations along the unique axis (Fig. 5). The relationship between modulated structures has traditionally been understood in terms of relaxation of symmetry; a translation becomes a pseudo-translation and so on. In the context of the structural landscape, it may be suggested that this relaxed structure represents events that occur later in the reaction coordinate for crystallization. Table 2 shows that the low-temperature form IV is denser and better packed than form III. The relaxation of symmetry allows for better packing and is in keeping with the idea of a landscape that is a profile of the energy events during crystallization.
Figure 4
Thermal (DSC) profile of forms II, III and V; the low-temperature reversible III → IV phase transition is shown in the inset.
Figure 5
Orcinol-bipyridine: structure along the unique axis (shown in red) in forms III (left) and IV (right). Notice the triple modulation in the low temperature form.
Fig. 6 shows the positions of the centers and pseudo-centers of inversion in form IV, especially with respect to synthon B. In this figure, the symmetry designation of the molecules is color coded.
In form V (P2_1/c), the same type of hydrogen bonding (synthon B) is observed as in forms II through to IV (O-H···N, 169.4°). However, form V differs from forms II through to IV in that the 'free' bipyridine is missing. We suggest that synthon B can develop into structures II or III by picking up a free bipyridine, or that alternatively it can nucleate and grow as form V, so that it is effectively a branch point in the landscape from which the crystallization events can proceed in two entirely different ways, depending on the experimental conditions.
The thermal profile of forms II, III and V in Fig. 4 shows that while form II has a clean single endotherm, the differential scanning calorimetry (DSC) trace of form III indicates some degree of conversion to form II and possibly the existence of form IV or some other uncharacterized form. Form V is in any case less stable and shows a broad endotherm at a lower temperature than any of the other forms. Form III is thus accessible on the landscape and leads to other forms.
Supramolecular synthon based fragments approach (SBFA) for the compounds in this study and their relative stabilities
The effectiveness of the SBFA method for the transferability of multipole charge density parameters is due largely to the ability of the supramolecular synthon to act mechanistically as a modular unit. The electronic features of the synthon may be moved from one structure to another in the charge density analysis; such a procedure provides detail over and above what is obtained at the atomic and covalent-bond level in the construction of 'synthetic' charge density maps, thereby giving a fine degree of agreement between theory and experiment (Hathwar, Thakur, Dubey et al., 2011). In the context of the crystal landscape, the experimental crystal structures of the compound in question are generally mapped using the computational approach of Crystal Structure Prediction (CSP) (Sarma & Desiraju, 2002; Neumann et al., 2008), in which possible crystal structures are predicted based on the energy-density profile. In spite of recent advances both in the algorithms and in computational power, CSP of a multi-component system is still a challenge. Further, the CSP protocol only takes into account the thermodynamic factors associated with packing, geometry optimization and clustering. It usually does not consider the kinetic factors which are involved during the course of crystallization events. In order to fill this conceptual gap, experimental as well as theoretical charge density methods could be used (Koritsanszky & Coppens, 2001), which provide an energy profile of the immediate molecular vicinity because they address the system in terms of individual interactions. However, these rigorous methods have several hurdles: very good crystals are needed, which diffract to high resolution (sin θ/λ ≥ 1.0 Å⁻¹), the structures should be free from disorder and modulation, and limitations like extinction and absorption should not be present. In the present study the multi-component orcinol-bipyridine system is both modulated and disordered and a rigorous charge density study is difficult. The utility of the transferable pseudo-atom database approach in such situations is documented (Pichon-Pesme et al., 1995; Domagala et al., 2012; Dittrich et al., 2006, 2013; Dominiak et al., 2009; Volkov et al., 2007); we have used the SBFA protocol, which we have developed and which is well suited to this situation.
Table 2 footnotes: (2:3) asymmetric unit; ‡ the calculation has been described in the supporting information; § data collection at 150 K.
Figure 6
Arrangement of center and pseudo-center related synthons in form IV. Color coding is based on symmetry equivalence: blue: center of inversion; red: pseudo-center of inversion. Molecules of the same color are related by symmetry.
Forms II through to V were accordingly quantitatively rationalized with SBFA. From the structural description it is clear that the robustness and modular nature of the hydrogen bonds associated with synthon B are the critical factors in applying SBFA to forms II through to V. This is quantified via multipole parameters of the O-H···N hydrogen bonds derived from high-resolution X-ray diffraction data. Transferring the multipole parameters of the O-H···N hydrogen bonds in synthon B, and of the other interactions, into forms II through to V generates charge density maps and provides quantitative insight into the electronic distribution in the intermolecular region through the topological parameters.
The multi-component systems were divided, as described previously (Hathwar, Thakur, Dubey et al., 2011), into logical fragments based on their synthons, which involve both strong and weak interactions (Fig. 1). Multipole parameters for the strong hydrogen bonds (O-H···N) present in all the structures were taken from the experimental data of the 4-hydroxybenzoic acid:isonicotinamide (4HBA:INA) cocrystal (Vishweshwar et al., 2003), while the weaker ones (C-H···O, C-H···N, C-H···π) were chosen from an in-house library of synthons. The synthesized charge density features from SBFA were visualized through their deformation and Laplacian plots, which were in agreement with multipole refinements performed on structure factors obtained from high-level density-functional theory calculations in CRYSTAL09 (Dovesi et al., 2009) (Fig. 7). The topological analysis of the intermolecular region was performed using the Quantum Theory of Atoms in Molecules (QTAIM), resulting in the location of bond-critical points for both the strong and the weak interactions present in the crystal structures. The comparison was restricted to forms II and III only, as our purpose was to verify the validity of the transferred model so that we could gain confidence in proceeding with the two other forms. In forms IV and V, which are more complex, the synthesized features were not compared with their theoretical values and were taken as they are. For form I we felt that the exercise itself was unfeasible because of the non-reproducibility of the form, as well as the somewhat poor quality of the data for the already reported structure.
In form II, the topological analysis confirmed that O-H···N is the strongest interaction in the crystal structure. The remaining interactions present in the structure reflect their strengths in terms of their lower values of ρ and ∇²ρ at the bond-critical points. The topological parameters of the SBFA model and of theory deviate only in certain regions, particularly for the strong O-H···N hydrogen bonds. The observed deviation in the Laplacian can be explained based on our previous work on the carboxylic acid dimer synthons, where it was attributed to the elongation of the O-H bonds. The comparable values of the SBFA and theoretical topological parameters for covalent bonds and other weak intermolecular interactions support the validity of the transferred model (Table 3). The comparison of form III with theory and the topological parameters of forms IV and V are summarized in the supporting information.
Table 3
Numerical (top) and graphical (bottom) comparison of the topological parameters, SBFA and theory (italics), of form II.
For the calculation of binding energies, it was also found convenient to define a molecular shell based on a coordination envelope around the asymmetric unit. In practice, all molecules that are found up to 0.2 Å beyond the van der Waals surface of the asymmetric unit (defined in terms of the closest group of bipyridine and orcinol molecules) constitute the shell. The shell contains O-H···N hydrogen bonds, C-H···N and C-H···O hydrogen bonds and several weak C···C interactions. A critical-point search within the shell provides all the contacts to be considered in the calculation. The Espinosa-Molins-Lecomte method (Espinosa et al., 1998) was used to calculate the interaction energy (E_int) using the Abramov expression (Abramov, 1997), which gives the kinetic and the potential energy densities at the bond-critical points. The magnitude of the energy obtained by this approach is an indicator rather than an absolute value, and hence a direct comparison with the values obtained from the periodic DFT calculations may not be appropriate.
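The Abramov/EML step described here amounts to a small amount of arithmetic on the topological parameters at each bond-critical point. The sketch below assumes the standard atomic-unit forms of Abramov's kinetic-energy estimate and the local virial relation, together with the common E_int ≈ V/2 rule; it is an illustration of the procedure, not the authors' actual script.

import math

KJ_PER_MOL_PER_HARTREE = 2625.5

def eml_interaction_energy(rho_bcp, laplacian_bcp):
    # Abramov (1997) estimate of the kinetic energy density g at the BCP (a.u.)
    g = (3.0 / 10.0) * (3.0 * math.pi ** 2) ** (2.0 / 3.0) * rho_bcp ** (5.0 / 3.0) \
        + laplacian_bcp / 6.0
    # Local virial relation gives the potential energy density v (a.u.)
    v = 0.25 * laplacian_bcp - 2.0 * g
    # Espinosa-Molins-Lecomte: interaction energy ~ v/2, converted to kJ/mol
    return 0.5 * v * KJ_PER_MOL_PER_HARTREE

# e.g. eml_interaction_energy(0.05, 0.12) for illustrative O-H...N BCP values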
The binding energy, that is the energy of a molecular shell, was used to compute the relative stabilities of the different forms using the EML method (Nelyubina et al., 2010). This was compared with the cohesive energy calculations performed using CRYSTAL09. In single-component crystals with Z' = 1, the treatment of the cohesive energy is straightforward. When Z' > 1 the computations are more involved but still manageable. In multi-component systems, however, the complexity of the calculations increases to a level that is unworkably tedious. In our system, the various forms do not even have the same Z' values and one of them is also a pseudopolymorph. In such a scenario, the SBFA and EML methods provide a simplified route to the calculation of energies, and this is a distinct advantage. The CRYSTAL09 and EML methods also have slightly different physical interpretations, which will be outlined more clearly in the next section.
The energies reported in Table 2 were obtained with CRYSTAL09 and EML, where the energy E_coh corresponds to the energy of the defined asymmetric unit. The calculated energies cannot be compared directly because the volume of the asymmetric unit differs between the five forms. Still, we attempted a quantification for forms I through to IV as they have the same 2:3 stoichiometric ratio. Even here we need to normalize the values because the number of molecules in the asymmetric unit is different in each of the forms. In this way, we find that form I is the least stable, and that forms II and III are equienergetic. It was conjectured that the energy values are biased considerably by the complexity of the cocrystal formation and by the values of Z', and hence another calculation based on energy per molecule was performed.
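A short illustration of the normalization just mentioned; the helper is hypothetical and simply re-expresses each form's energy on a per-molecule basis so that asymmetric units with different molecule counts can be compared.

def energy_per_molecule(e_asymmetric_unit, n_molecules_in_asymmetric_unit):
    # e.g. a 2:3 orcinol-bipyridine asymmetric unit contains five molecules
    return e_asymmetric_unit / n_molecules_in_asymmetric_unit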
Structural landscape
The concept of the landscape follows naturally from the phenomena of polymorphism (Bernstein, 2002) and pseudopolymorphism and is conveniently applied to mono-component systems. In two-component systems, like the present case, it seems natural to assume that the earliest stages of recognition (smallest synthons) are heteromolecular in nature, for how else would a two-component system be obtained? The very fact that a two-component crystal AB is even obtained suggests that one or more interactions of the type A···B are better than any of the interactions of the type A···A or B···B (Sarma & Desiraju, 1985). In turn, these stable A···B heterosynthons permute themselves in different ways to give various polymorphs, so that one is, in effect, traversing the landscape. It may be supposed that the simple synthon B associates with other such synthons in solution without any symmetry constraints, and as crystallization becomes more enthalpy controlled, these clusters (aggregates, shell) gradually approach the final configurations as seen in forms II and III (more symmetry constraints), and forms IV and V (fewer symmetry constraints). In effect, the small synthons represent a certain 'irreversible' point and all subsequent events follow from it.
Structures II through to V may be understood as representing alternative arrangements of modules, that we have termed LSAMs. At this point, it is worth noting that the LSAM, as we have defined, has some similarity with the 'growth unit' as defined by Davey (Davey et al., 2002) with the caveat that Davey's growth unit may also incorporate solvent. The similarity stems from the fact that all these species are pre-nucleation entities. The modularity of LSAMs permits an analysis of charge density data in terms of contributions from structural fragments that are treated in a similar way to the pseudo-atoms in the classical charge density transferability approaches. The modularity of these structural fragments also helps in their analysis from the viewpoint of the structural landscape. Modularity is the key link that connects all the structures that constitute the landscape.
If there are two steps in the late stages of crystallization, then in the first step finite strands of alternating bipyridine (three molecules) and orcinol (two molecules), which we have defined as synthon A, crystallize to give form I, with O-H···N hydrogen bonds in the distance range 1.75-1.81 Å and an angle range 164-179°. This is shown in Fig. 8(a). It is not at all difficult to conceive a process in which successive O-H···N hydrogen bonds are made and broken (Fig. 8b) and orcinol molecules rotate by nearly 180° so that the structure evolves into that of form II with its closed synthon (Fig. 8c). In this scenario, form I would be a kinetic form. Although not proof, the fact that we were unable to obtain form I again hints that it is also metastable. To summarize, there are two aggregation possibilities for the LSAMs. In both possibilities the third bipyridine is an active participant, either being hydrogen bonded (form I) or facilitating close packing (forms II through to IV). Form V is different and develops independently from synthon B, not requiring the free bipyridine. This is shown in Fig. 9.
During navigation of the nucleation pathway of forms II through to IV, the system exploits the modular nature of synthon B as a means of achieving the structures according to their energies. As already discussed, forms II and III are similar at the level of the smaller aggregate, and this suggests that up to a certain point the modular unit follows the same nucleation path; but as the aggregation level becomes larger and larger there are shifts leading to a choice between two different pathways, finally resulting in two different crystal structures. The quantitative analysis leading to estimates of interaction energies using two different approaches (CRYSTAL09 and SBFA/EML) brings out the significance of the regions of the landscape between forms II and III. In practical terms, both forms appear to be practically equivalent in that they lie in nearly the same region of the energy-density plot. On the other hand, form II shows slightly better packing than form III, and after the phase transition form III converts into form IV, which is packed as efficiently as form II (Table 2). This emergence of better packing is the result of the relaxation of the symmetry constraints in form III. In the context of the landscape, such behaviour is interesting. Forms II and IV have nearly the same densities, energies and packing efficiencies, but they represent entirely different pathways in the landscape, and their structures are also quite different. The relative stabilities of forms II, III and IV, which have similar asymmetric units, are listed below.
least stable → most stable: III < IV < II (CRYSTAL09); III < IV < II (SBFA/EML)
It is unlikely that forms II and IV can interconvert easily, but they have the lowest energies among the forms isolated and studied here. Which is the global minimum? Is there another, yet undiscovered, form which is of even lower energy? If not, is it fair to speak of two independent crystallization pathways, each leading to a stable outcome? In this case, how relevant were early versions of the CSP blind tests that demanded only the top three choices for a molecule or a cocrystal? Information such as is obtained here regarding the various possible nucleation pathways along the reaction coordinate during crystallization (for example the existence of form III, which finally leads to form IV, seemingly just as favourable as form II) conveys that there need not be just one structure at the global minimum. More than ever, there is now a compelling feeling that the concept of 'crystal structure' is not unique. The comment needs emphasis: rather than speak of the crystal structure of a molecule, a term that may have only limited meaning in the landscape context, it may be fairer to speak of a crystal structure of a molecule (in this case a molecular system, because we are dealing with cocrystals). A particular structure is just one of many.
During the solution → crystal pathway, there could be various metastable forms and structural fragments which may not appear in the final crystal structure. These metastable structures are the outcome of crystal synthesis and arise from the competition between the kinetic and thermodynamic factors associated with the transition from an entropy-dominated scenario in solution to an enthalpically determined crystal. These (metastable) crystal structures (known as polymorphs or pseudopolymorphs), which encapsulate thermodynamic factors as well as energy-density profiling, constitute a large landscape which may be accessed either via the hydrogen-bond hierarchies established on the way from solution to the supersaturated state or via thermal transformations in the final stages of crystallization. In solution, molecules recognize one another based on their complementary hydrogen-bonding functionalities, which define the basic kinetic units of the crystal structure, namely supramolecular synthons. Even as the hydrogen-bond hierarchies are established, and based on their respective energies, the synthons optimize themselves from the initial through the intermediate to the final stages of crystallization until chemically (kinetic) or geometrically (thermodynamic) reasonable structures are obtained for the organic compound in question. This is the final stage of crystallization in which the dichotomy between synthon preferences and close packing becomes fully manifest. Alternatively, one might reduce the molecule → crystal progression in the landscape to a discussion of packing (thermodynamic aspects) and synthon theory (kinetic aspects). We would urge an appreciation of both these viewpoints in the current scenario (Kitaigorodskii, 1973; Dunitz & Gavezzotti, 2005; Desiraju, 2007).
Conclusions
This work addresses several questions that highlight the difficulties faced in dealing with complex systems in crystal engineering. Our approaches have posed questions that future methodologies should hopefully address. We have shown that the closed zero-dimensional convergent phenol···pyridine synthon (synthon B) is robust and constitutes the element of modularity that causes extensive polymorphism in the orcinol-bipyridine system. Within the context of the present example, it is therefore not possible to state that cocrystal formation decreases the likelihood of polymorphism. It is true that the major interaction, namely O-H···N, is conserved in all the forms, but the polymorphism is caused by variation in the more minor interactions, in other words in the ways in which the synthon B modules are arranged with respect to one another. Larger assemblies of synthon B may be termed Long Range Synthon Aufbau Modules or LSAMs. This work also shows that this collection of polymorphs of orcinol-bipyridine constitutes a landscape which may be studied by energy profiling and interaction profiling, both of which may be carried out with charge density studies, more particularly with our newly suggested Supramolecular Synthon Based Fragments Approach (SBFA) for the transferability of multipole parameters. The synthon is a modular structural unit that lends itself particularly well to the transferability of electron density information in a crystal structure. The polymorphs of orcinol-bipyridine are structurally complex in a manner that renders them problematic for other methods of charge density analysis, and our method offers some choice in this regard. The idea of a structural landscape that is defined by polymorphs, solvates and computed structures provides an indication of events in the late stages of crystallization. While computational CSP provides input on the thermodynamics of these events, or vertical profiling of the landscape, charge density methods give information on the interactions themselves and therefore, in principle, can lead to horizontal profiling and a measure of the kinetics that underlie crystallization events, because in the end it is the energy and distance dependence of individual interactions that determine the actual crystallization pathways, which are essentially kinetically governed.
"Chemistry"
] |
Dynamical Gibbs-non-Gibbs transitions in the Curie-Weiss Potts model in the regime beta<3
We consider the Curie-Weiss Potts model in zero external field under independent symmetric spin-flip dynamics. We investigate dynamical Gibbs-non-Gibbs transitions for a range of initial inverse temperatures beta<3, which covers the phase transition point beta=4 log 2 [8]. We show that finitely many types of trajectories of bad empirical measures appear, depending on the parameter beta, with a possibility of re-entrance into the Gibbsian regime, of which we provide a full description.
Research context
The past years have seen progress from various directions in the understanding of Gibbs -non-Gibbs transitions for trajectories of measures under time-evolution, and also more general transforms of measures. The Gibbs property of a measure describing the state of a large system in statistical mechanics is related to the continuity of single-site conditional probabilities, considered as a function of the configuration in the conditioning. If a measure becomes non-Gibbsian, there are internal mechanisms which are responsible for the creation of such discontinuous dependence. This leads to the study of hidden phase transitions, which was started in the particular context of renormalization group pathologies in van Enter, Fernández, and Sokal [33].
Such studies have been made for a variety of systems in different geometries, for different types of local degrees of freedom, and under different transformations. Let us mention here time-evolved discrete lattice spins [19,30], continuous lattice spins [24,34], time-evolved models of point particles in Euclidean space [17], and models on trees [32]. For a discussion of non-Gibbsian behavior of time-evolved lattice measures in regard to the approach to a (possibly non-unique) invariant state under dynamics, see [16]; for the relevance of non-Gibbsianness to the infinite-volume Gibbs variational principle (and its possible failure) see [22,25]. For recent developments for one-dimensional long-range systems, and the relation between continuity of one-sided (vs. two-sided) conditional probabilities, see [2-4,31].
In the present paper we are aiming to contribute to the understanding of Gibbs -non-Gibbs transformations for mean-field models, in the sense of the sequential Gibbs property [6, 9-11, 14, 15, 18, 21]. Usually there is a somewhat incomplete picture for lattice models, due to the difficulty to find sharp critical parameters. Mean-field models on the other hand are often "solvable" in terms of variational principles which arise from the large deviation formalism, while the remaining model-dependent task to characterize the minimizers and understand the corresponding various bifurcations can be quite substantial. We choose to work for our problem in the so-called two-layer approach, in which one needs to understand the parameter dependence of the large-deviation functional of a conditional first-layer system. In this functional the conditioning provides an additional parameter given by an empirical measure on the second layer. This is more direct than working in the Lagrangian formalism on trajectory space, which would provide additional insights on the nature of competing histories that explain the current state of the system at a discontinuity point [9,20,28,29].
Compared to the Curie-Weiss Ising model, the Fuzzy Potts model and the Widom-Rowlinson models, we find in the present analysis of the time-evolved Curie-Weiss Potts model significantly more complex transition phenomena, see Theorem 2 and Figure 2. This has to be expected as already the behavior of the fully non-symmetric static model is subtle [23]. It forces us to make use of the computer for exact symbolic computations, in the derivation of the transition curves (BU, ACE and TPE in Figure 2, discussed in Sects. 4.4, 4.5 and 4.6), along with some numerics for our bifurcation analysis. We believe that these tools (see page 42) may also be useful elsewhere. Now, our approach rests on singularity theory [1,5,12,13,27] for the appropriate conditional rate functional of the dynamical model. This provides us with a four-parameter family of potentials, for a two-dimensional statevariable taking values in a simplex. It turns out that the understanding of the parameter dependence of the dynamical model is necessarily based on the good understanding of the bifurcation geometry of the free energy landscape of the static case for general vector-valued fields [23]. In that paper, which generalizes the results of Ellis and Wang [8] and Wang [35], we lay out the basic methodology. Therein we also explain the phenomenology of transitions (umbilics, butterflies, beak-to-beak) from which we need to build here for the dynamical problem.
As a result of the present paper we show that the unfoldings of the static model indeed reappear in the dynamical setup, and acquire new relevance as hidden phase transitions. It is important to note that, in order for this to be true, we have to restrict to mid-range inverse temperatures β < 3. More work has still to be done to treat the full range of inverse temperatures for the dynamical model, where more general transitions seem to appear for very low temperatures. For the scope of the present paper, it is this close connection between the static model [23] in fully non-symmetric external fields, and the symmetrically time-evolved symmetric model in the intermediate β range, which is really crucial to unravel the types of trajectories of bad empirical measures of Theorem 2. It would be challenging to explore whether an analogous non-trivial connection, which we observe for our particular model, holds for more general classes of models. This clearly asks for more research.
Overview and organization of the paper
In the present paper we study the simplest model which is, together with its time-evolution, invariant under the permutation group with three elements: We consider the 3-state Curie-Weiss Potts model in zero external field, under an independent symmetric stochastic spin-flip dynamics. Based on previous examples [21], one may expect loss without recovery of the Gibbs property for all initial temperatures lower than a critical one (which then may or may not coincide with the critical temperature of the initial model), and Gibbsian behavior for all times above the same critical temperature. We show that this is not the case for our model, and the behavior is much more complicated: The trajectories of the model show a much greater variety, depending on the initial temperature. We find a regime of Gibbs forever (I), a regime of loss with recovery (II) and a regime of loss without recovery (III). Figure 1 shows the non-Gibbs region in the two-dimensional space of initial temperature and time. The boundary of this non-Gibbs region consists of three different curves which correspond to exit scenarios of different types of bad empirical measures. Bad empirical measures are points of discontinuity of the limiting conditional probabilities as defined in Definition 1. Under the time evolution t ↑ ∞ (or equivalently g t ↓ 0 given by (4)) the system moves along vertical lines of fixed β towards the temperature axis. Intersections with a finite number of lines occur along this way, which are responsible for the transitions described in our main theorem, Theorem 2. These additional relevant lines are shown in Figure 2. Theorem 2 rests on the understanding of the structure of stationary points of the time-dependent conditional rate function given in Formula (9) via singularity theory. It turns out that the bifurcations we encounter for general values of the four-dimensional parameter (α, β, t) ∈ ∆ 2 × (0, ∞) × (0, ∞) (see (6)) are of the same types as for the static model depending on a three-dimensional parameter. However, this holds only if we restrict to mid-range inverse temperatures β < 3 and to endconditionings α taking values in the unit simplex (and not in the full hyperplane spanned by the simplex). Nevertheless, in order to understand the relevant singularities, the analysis is best done by first relaxing the probability measure constraint on the parameter α and allow it to take values in the hyperplane. The analysis proceeds with a description of the bifurcation set, where the structure of stationary points of the conditional rate function changes, and the Maxwell set, where multiple global minimizers appear. To pick from these transitions the ones which are relevant to the problem of sequential Gibbsianness and visible on the level of bad empirical measures, we have to take the probability measure constraint for α into account. This step is neither necessary in the static Potts nor in the dynamical symmetric Ising model. The lines Symmetric cusp exit (SCE), Asymmetric cusp exit (ACE), Triple point exit (TPE) and Maxwell triangle exit (MTE) depicted in the full phase diagram in Figure 2 are examples of such exit scenarios. For those lines there is an exit of a certain particular critical value of α from the unit simplex (observation window). The detailed dynamical phase diagram in Figure 2 shows more information about the transitions during time evolution. Preliminary investigations show that the structural similarity with the static case may no longer be valid in the regime β > 3. 
Therefore we leave the region of very low temperatures for future research.
We describe the model we are considering together with its time-evolution in Sect. 1.3 where we also define what we mean by Gibbsianness (or the sequential Gibbs property). In Sect. 2 we present our main theorem and describe the transitions of the sets of bad empirical measures as a function of the parameters β and t. We will establish the connection between the analysis of the potential function G α,β,t and the Gibbs property of the time-evolved model in Sect. 3. The analysis of the potential function using the methods of singularity theory is then carried out in the Sects. 4 and 5.
The model and sequential Gibbsianness
We consider the mean-field Potts model with three states in vanishing external field under an independent symmetric spin-flip dynamics. The space of configurations in finite volume n ≥ 2 is Ω_n = {1, 2, 3}^n, and the Hamiltonian of the initial model is given in (1); at time t = 0 the distribution of the model is then given by (2). We consider a rate-one symmetric spin-flip time-evolution in terms of independent Markov chains on the sites, with transition probabilities from state a to b as specified in (3)-(5); the time-dependent parameter g_t appearing there is given by (4). We are interested in the Gibbsian behavior of the time-evolved measure µ_{n,β,t}. The unit simplex ∆_2 contains the empirical distributions of the spins. By Gibbsian behavior we mean the existence of limiting conditional probabilities in the following sense.
Definition 1. A point α in the unit simplex ∆_2 is called good if the infinite-volume limit of the single-site conditional probabilities exists for every family η_{n,k} ∈ {1, 2, 3}, with n ≥ 2 and 2 ≤ k ≤ n, whose empirical measures converge to α. We call α bad if it is not good. The model µ_{β,t} is called sequentially Gibbs if all α in the unit simplex ∆_2 are good points.
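For orientation, a minimal simulation sketch of the time evolution and of empirical measures, written in Python. The generator used here (each spin jumps to each of the other two states at rate 1/2, so total rate one) is one common convention and is an assumption on our part, and sampling from the initial Potts measure (1)-(2) is omitted.

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def transition_matrix(t):
    # Single-site kernel p_t(a, b) for a symmetric spin flip at total rate one
    # (assumed generator: rate 1/2 to each of the other two states).
    Q = 0.5 * (np.ones((3, 3)) - 3.0 * np.eye(3))
    return expm(t * Q)

def evolve(sigma, t):
    # Flip every spin independently according to p_t (spins coded 0, 1, 2).
    P = transition_matrix(t)
    return np.array([rng.choice(3, p=P[s]) for s in sigma])

def empirical_measure(sigma):
    # Point of the unit simplex recording the fraction of each spin value.
    return np.bincount(sigma, minlength=3) / len(sigma)

sigma0 = np.zeros(1000, dtype=int)        # a fully ordered start configuration
print(empirical_measure(evolve(sigma0, 0.5)))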
Dynamical Gibbs-non-Gibbs transitions: main result
Our main result on the dynamical Gibbs-non-Gibbs transitions in the high-to-intermediate temperature regime, that is for initial inverse temperature β < 3, is as follows. This temperature regime ranges from high temperature, covering the phase transition temperature (the Ellis-Wang inverse temperature β = 4 log 2), up to the elliptic umbilic point β = 3 (where the central stationary point of the time-zero rate function in zero external field changes from a minimum to a maximum). Essential parts of the structure of the trajectories of dynamical transitions as a function of the time t in the regime β < 3 remain unchanged over the three inverse-temperature intervals I, II and III, which were already visualized in Figure 1. The types of transitions can be understood as deformations of the sequences of transitions found in the static Potts model in general vector-valued fields analyzed in [23], where only the one-dimensional parameter β was varied. Observe, however, that the dynamical transitions we describe here do not necessarily occur in a monotonic order with respect to what is seen in the static model under temperature variation. This is for instance (but not only) apparent in the phenomenon of recovery of Gibbsianness. At very low temperatures (β > 3) different bifurcations seem to occur, which are left for future research. While reading the following theorem it is useful to have Figure 2 in mind, as the inverse temperatures and transition times are related to the lines depicted in the dynamical phase diagram.
Theorem 2. Consider the time-evolved Curie-Weiss Potts model given by (1)-(2) in zero external field, for initial inverse temperature β > 0 and at time t > 0 under the symmetric spin-flip dynamics (3)-(5). Then the following holds.
(ii) For β_BE < β < 8/3 the bad empirical measures are given by three symmetric straight lines in a first time interval t_NG(β) < t < t_BU(β). For a second time interval t_BU(β) < t < t_TPE(β), the set of bad empirical measures consists of three symmetric Y-shaped sets not touching one another. For t_TPE(β) < t < t_ACE(β) the set of bad empirical measures consists of six disconnected arcs. For t > t_ACE(β) the system is Gibbsian again.
(iii) For 8/3 < β < β* and t_NG(β) < t < t_BU(β) the bad empirical measures consist of three symmetric straight lines. For t_BU(β) < t < t_TPE(β), the set of bad empirical measures consists of three Y-shaped sets not touching one another. For t_TPE(β) < t < t_B2B(β) the set of bad empirical measures consists of six disconnected arcs. For t_B2B(β) < t < t_MTE(β) the set of bad empirical measures consists of three disconnected arcs. For t > t_MTE(β) the system is Gibbsian again. The inverse temperature β* is given by the intersection point of the two lines B2B and TPE in Figure 2.
(iv) For β* < β < 4 log 2 and t_NG(β) < t < t_BU(β).
The meaning and computation of these lines are discussed in Sects. 4 and 5. While only the three lines SCE, ACE and MTE appear as parts of the boundary of the non-Gibbs region, the other lines are relevant for structural changes of the set of bad empirical measures. Some lines are explicit in the sense that they are given in terms of zeros of one-dimensional non-linear functions, for example the entry time t_NG(β) (formula (60)) or the butterfly unfolding time t_BU(β) (formula (72)). The least explicit lines are the MTE and TPE lines, which involve a Maxwell set computation; the most explicit line is SCE, which is given in parametric form s → (β(s), g_t(s)) as described in Proposition 8. Figure 3 gives a graphical overview of the possible types of sequences of bad empirical measures with increasing time for the different temperature regimes. An even more detailed graphic that illustrates all the transitions involved in the bifurcation set as well as in the Maxwell set can be found in the electronic supplementary material (ESM) under the filename detailed_overview.pdf.
Figure 3
Overview of the sequences of bad empirical measures for the regimes II (recovery) and III (loss without recovery). In (II.ii) straight lines enter the simplex, become non-touching Y-shaped sets at the butterfly transition time t_BU(β) and move out of the simplex; the midpoints of the Y-shaped sets exit at t_TPE(β) and the set leaves the simplex completely at t_ACE(β). In (II.iii) the midpoints of the Y-shaped sets leave the unit simplex at t_TPE(β) and the two respective arcs connect at the beak-to-beak transition time t_B2B(β); the remaining three arcs move towards the corners and leave the unit simplex at t_MTE(β). The exit of the midpoints of the Y-shaped sets and the connection of the six arcs occur in reversed order in the next row (II.iv). In (III) the central triangle shrinks to a point and forms the star-like set that remains in the simplex forever.
Infinite-volume limit of conditional probabilities
The existence of the infinite-volume limit of the conditional probabilities, that is, the question of sequential Gibbsianness, can be transformed into an optimization problem of a certain potential function. As the parameters (β, t) are fixed throughout this section let us write µ n for the measure µ n,β,t .
Theorem 3. If the potential G_{α,β,t} defined in (9) has a unique global minimizer, then α is a good point, that is, the infinite-volume limit of the conditional probabilities µ_n(·|α_n) with α_n → α exists independently of the choice of (α_n).
The idea of the proof goes as follows: we can rewrite the conditional probabilities µ_n(·|α_n) in terms of an expected value with respect to a disordered mean-field Potts model μ̃_n (see Lemma 4). Thus, we have to study the weak convergence of L_n, where L_n is the empirical distribution of the spins σ_2, . . . , σ_n. Note that this is equivalent to the weak convergence of W/√(β(n − 1)) + L_n from Lemma 5, so we can prove the theorem by an asymptotic analysis of Laplace-type integrals with exponent −(n − 1)G_{α_n,β,t}, as was done by Ellis and Wang [8]. So it suffices to prove the Lemmata 4 and 5.
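The asymptotic step alluded to here is a Laplace-type argument. Under the assumption of a unique global minimizer m* of G_{α,β,t}, the mechanism (stated schematically, not in the precise form used in [8]) is

\[
\frac{\int f(m)\, e^{-(n-1) G_{\alpha_n,\beta,t}(m)}\, \mathrm{d}m}
     {\int e^{-(n-1) G_{\alpha_n,\beta,t}(m)}\, \mathrm{d}m}
\;\longrightarrow\; f(m^{*}),
\qquad n \to \infty,
\]

for bounded continuous f, which gives the law of large numbers for the empirical measure and hence the existence of the limiting conditional probabilities.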
A point is good if the respective random field model shows no phase transition, that is, if the law of large numbers holds. To be precise, we have the following representation.
Lemma 4. The finite-volume conditional probabilities µ_n(·|α_n) can be written as expectations with respect to a measure μ̃_n, where μ̃_n is a quenched random field Potts model.
Proof. The proof follows from explicit computations with conditional probabilities.
This representation of the conditional probabilities transforms the problem of understanding bad points into the analysis of disordered mean-field models and their phase transitions. This analysis is done using the Hubbard-Stratonovich transformation, which has been used successfully for many models [7,8,21].
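For orientation, the Gaussian linearization that presumably underlies this transformation is the elementary identity (stated here for a vector v ∈ R^3 and a > 0; the exact form used for Lemma 5 is the displayed computation in its proof):

\[
e^{\frac{a}{2}\,|v|^{2}}
= \Bigl(\frac{a}{2\pi}\Bigr)^{3/2}
  \int_{\mathbb{R}^{3}} e^{-\frac{a}{2}\,|m|^{2} + a \langle m, v\rangle}\, \mathrm{d}m .
\]

Applied with a = β(n − 1) and v = L_n, the quadratic interaction term is traded for a Gaussian integral, which is where the Gaussian vector W of Lemma 5 enters.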
Lemma 5. Write L_n for the empirical measure of the n − 1 spins σ_2, . . . , σ_n, with law μ̃_n[η_2, . . . , η_n] ◦ L_n^{-1}. Furthermore, let W be a standard normal random vector independent of L_n. Then the distribution of W/√(β(n − 1)) + L_n has a density proportional to e^{−(n−1)G_{α_n,β,t}} with respect to Lebesgue measure.
Proof. Denote by σ_2, . . . , σ_n independent {1, 2, 3}-valued random variables, each distributed according to p_t(dσ_i, η_i) for a fixed boundary configuration η_2, . . . , η_n with empirical measure α_n, and denote the expectation with respect to this distribution by E. In order to calculate the distribution of W/√(β(n − 1)) + L_n, we calculate for every bounded continuous function f the corresponding expectation, apply the transformation m = w/√(β(n − 1)) + L_n, and obtain an integral over m. To complete the proof, we calculate the remaining expectation over the spins and take the logarithm to raise the resulting expression back into the exponent. The expected value (16) of the bounded continuous function f is then equal, up to a normalizing constant, to an integral of f against e^{−(n−1)G_{α_n,β,t}}, and we can identify G_{α_n,β,t}, as defined in (9), in the exponent.
Recovery of the Gibbs property
The regime β < 8/3 is split into three parts given by the intervals (0, β_NG], (β_NG, β_BE] and (β_BE, 8/3). In the first part we find that the model is sequentially Gibbs for all times t > 0, whereas in the other two parts the system recovers from a state of non-Gibbsian behavior. The driving mechanism in this 'recovery regime' is the butterfly singularity, which is already found in the static model [see 23, Sect. 2.4.1]. However, in contrast to the static model, the bifurcation set might leave the unit simplex, so that in order to answer the Gibbs-non-Gibbs question the location of this set (and of the contained Maxwell set) with respect to the unit simplex is also important.
Elements from singularity theory
In order to investigate the Gibbs -non-Gibbs transitions we have to study the global minimizers of the potential G α,β,t (Theorem 3). We will use concepts from singularity theory to derive and explain our results.
Singularity theory allows us to understand how the stationary points of the potential change with varying parameters. This can be achieved by looking at the geometry of the so-called catastrophe manifold, which contains the information about the stationary points of the potential for every possible choice of parameter values. More precisely, it consists of the tuples (m, α, β, t) such that m is a stationary point of G_{α,β,t} given by (9). The bifurcation set consists of those parameter values (α, β, t) in ∆_2 × (0, ∞) × (0, ∞) for which there exists a degenerate stationary point m in R^3, that is, a point at which the Hessian has a zero eigenvalue. The parameter values of the bifurcation set give rise to a partition of the parameter space whose cells contain parameters at which the number and nature of the stationary points do not change. Although we are only interested in α that are bad empirical measures, hence probability measures, it is convenient to loosen this constraint and consider α in the hyperplane H = {m ∈ R^3 | m_1 + m_2 + m_3 = 1}, into which the unit simplex is embedded. The following proposition is the basis for the analysis of the bifurcation set.
Proposition 6. Then we have the following. (a) Let ρ be any permutation of {1, 2, 3}. Then the equivariance relation (22) holds, where we interpret the permutation ρ as a 3 × 3 matrix and M as a column vector; for example, ρ may exchange two distinct elements a, b of {1, 2, 3}.
(c) The catastrophe manifold of the HS-transform G_{α,β,t} is the graph of the map (m, β, t) → α = χ(m, β, t) given by (23)-(24) for m ∈ R^3. In these coordinates, the β-scaled simplex β∆_2 is an equilateral triangle in the (x, y)-plane centered at the origin. The Hessian matrix of G_{α,β,t} at m is in block diagonal form (27), and the set of degenerate stationary points is given by the solutions (m, β, t) of equation (28).
Before we present the proof, let us stress the importance of this proposition. The matrix Γ naturally appears in the derivatives of G_{α,β,t} and has two important properties: firstly, the rows of Γ are probability vectors, and secondly the map M → Γ(M, t) is compatible with the symmetry of the model. The fact that the catastrophe manifold is given as a graph allows us to write the bifurcation set as the set of points (χ(m, β, t), β, t) for which (28) holds. We can therefore take the same point of view as in the static case [cf. 23, Lemma 3]: we study the zeros of the Hessian determinant as a function of m with β and t fixed. This is a two-dimensional problem since we only have to consider points in the unit simplex ∆_2. Additionally, ∆_2 is bounded, so that we can simply compute the zeros of the Hessian determinant numerically on a discretization of ∆_2, as accurately as we want. In this way we can get insight into the global shape of the bifurcation set. It is convenient to view this set as composed of the bifurcation set slices B(β, t), that is, the subsets for which the parameter (β, t) is fixed. Figure 4 shows an example of the zeros of the Hessian determinant together with the respective image under the map χ(·, β, t) for a fixed pair (β, t). We now continue with the proof of the above proposition.
Proof of Proposition 6. Let us prove the claims in increasing order. Fix arbitrary M ∈ R 3 and positive t. The following equation proves (22).
We proceed with the second point. Note that the matrix Γ(M, t) can be written as the product DE of a diagonal matrix D = (D_{a,b}), a, b ∈ {1, 2, 3}, and a matrix E. Since det Γ(M, t) = det(D) · det(E), and the determinant of D is clearly positive, we have to check that det(E) is positive in order to see that Γ(M, t) lies in the general linear group. We find that the determinant of E is given by an expression which is clearly positive for all positive g_t.
Figure 4
Zeros of the Hessian determinant and their images under χ(·, β, t). The branches of the degenerate points on the left and their corresponding images under χ(·, β, t) on the right are marked with the same color. Note that although the degenerate stationary points in the left plot lie inside ∆_2, in the right plot parts of the bifurcation set slice lie outside of the simplex. This is a major difference from the static case.
To prove the formula for the inverse, let a, b and d be pairwise different elements of {1, 2, 3}. Substituting the right-hand sides of (23)-(24), we obtain a system of equations; adding the right-hand sides of the first three equations yields zero and adding those of the last two gives one. This proves the formula for the inverse.
We now prove that the catastrophe manifold is the graph of χ. First, let us check that the range of χ is indeed the hyperplane H. Take an arbitrary point (m, β, t) in H × (0, ∞) × (0, ∞) and let α = χ(m, β, t).
Since (1, 1, 1)^T is an eigenvector of Γ(βm, t) for the eigenvalue 1, it is also an eigenvector of Γ^{-1}(βm, t) for the same eigenvalue. Therefore the components of α sum to one, so α is an element of H. Next, we show that the catastrophe manifold is the graph of χ. The differential of G_{α,β,t} is computed explicitly; since Γ(βm, t) is invertible, the stationarity equation ∇G_{α,β,t}(m) = 0 can be solved for α, and we find α = χ(m, β, t). If α is in ∆_2, then ∇G_{α,β,t}(m) = 0 implies that m also lies in ∆_2, since 0 < Γ_{b,a}(βm, t) < 1 for all b, a in {1, 2, 3}.
To show (27) and (28), observe that the second derivative of G_{α,β,t} is given by a matrix expression in Γ = Γ(βm, t). The partial derivatives of Γ_{c,a} are elements of the tangent space of ∆_2 for every c in {1, 2, 3}, that is, summing over a yields zero. Since the coordinate basis of the (x, y, z)-chart is an orthogonal basis, we obtain the block diagonal form (27). Finally, since β > 0 and α = χ(m, β, t), the condition for degenerate stationary points, namely the vanishing of the Hessian determinant of G_{α,β,t} at m, is equivalent to equation (28).
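The numerical procedure described before the proof (locating zeros of the Hessian determinant on a discretization of the simplex) can be organized as in the following sketch. The potential is deliberately left as a user-supplied callable standing in for G_{α,β,t} from (9); only the finite-difference Hessian and the near-zero scan reflect the text.

import numpy as np

def hessian_det(potential, x, y, h=1e-4):
    # 2x2 Hessian determinant of `potential` at (x, y) by central differences.
    f = potential
    fxx = (f(x + h, y) - 2.0 * f(x, y) + f(x - h, y)) / h ** 2
    fyy = (f(x, y + h) - 2.0 * f(x, y) + f(x, y - h)) / h ** 2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4.0 * h ** 2)
    return fxx * fyy - fxy ** 2

def bifurcation_slice_candidates(potential, grid_points, tol=1e-3):
    # Grid points of a discretized simplex where det(Hessian) is close to zero,
    # i.e. candidates for degenerate stationary points (bifurcation set slice).
    return [(x, y) for (x, y) in grid_points
            if abs(hessian_det(potential, x, y)) < tol]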
Universality hypothesis connecting the mid-range dynamical model with the static model
In our work we are guided by the following universality hypothesis, which provides a useful organizing principle for understanding the transitions that appear. It is suggested by the universality seen in local bifurcation theory, and it is verified for our model in the full set of mid-range temperatures β < 3 by means of our analytical treatment in the sequel of the paper, aided in some parts by computer algebra and numerics. There exists a map from the two parameters temperature and time of the dynamical model to one effective temperature parameter of the static model, of the form β_st = β_st(β, t), which for our model is defined on the whole subset {(β, t) | 0 < β < 3, t > 0} of the positive quadrant (and not only locally), and this map has the following property. At fixed (β, t) the bifurcation set slice B(β, t) ⊂ ∆_2, in the space of end-conditionings α of the dynamical model, is diffeomorphic to a subset of the corresponding bifurcation set slice B_st(β_st) ⊂ ∆_2 of the static model under a smooth (β, t)-dependent map ψ_1(·, β, t). The image of ∆_2 under this map, which we call the effective observation window, always contains the uniform distribution. However, it may be much smaller than ∆_2 for some parameter values; in fact, this will happen as t ↑ ∞, as we will see. The map β_st(β, t) from dynamical to static parameters is (only) uniquely defined on the critical lines EW, B2B and BU of the dynamical model (see Figure 2), which get mapped to the corresponding static values β_st = 4 log 2, β_st = 8/3 and β_st = 18/7 [see 23, Table 1]. The following conjecture underlies this hypothesis, as it expresses the structural similarity of the dynamical and static rate functionals by means of a parameter-dependent map acting on the state space ∆_2; compare with the definition of equivalent potentials in [27, Chapter 6, Section 1].
Conjecture 7. There exists a set U which contains the unit simplex ∆_2 and is open in the hyperplane H such that the following holds. (c) For every (α, β, t) in D and every m in ∆_2 the corresponding identity relating the dynamical potential G_{α,β,t} to the static potential holds, where f_{β,α} denotes the potential (5) and pr_1 denotes the projection (0, ∞) × ∆_2 → (0, ∞). In other words, the effective static inverse temperature β_st does not depend on the dynamical α.
A comparison of Figure 9 with [23, Figure 5] gives evidence for the existence of the map ψ_1, as the bifurcation set slice of the static model looks structurally similar to the bifurcation set slice of the dynamical model in a neighbourhood of the unit simplex. The contour plots in the rightmost panels of the two figures support the existence of the map ψ_2, as the contour plot of the dynamical potential G_{α,β,t} looks structurally similar to a subset of the contour plot of the static potential f_{β_st(β,t), α_st(α,β,t)}. Note, however, that we are not going to construct the maps ψ_1 and ψ_2 in the following sections of the paper, and we do not need to do so. Instead, we explicitly compute the critical lines from the dynamical potential following the ideas of singularity theory. This means that the lines can be found independently of the construction of the maps ψ_1 and ψ_2. The behavior of the model in the vicinity of these lines follows from Thom's classification theorem [see 26, Section 5 of Chapter 3], and our global analysis is supported by the global numerical analysis of the relevant parts of the dynamical bifurcation set. In the following sections we proceed with the discussion of the critical lines.
The symmetric cusp exit (SCE) line and the non-Gibbs temperature
The non-Gibbs inverse temperature β_NG is defined as the supremum of all β such that µ_{n,β,t} is sequentially Gibbs for all positive t; it turns out to be a maximum. As the type of transitions of the dynamical model for mid-range temperatures can be understood in terms of the static case, let us remark that in the static Potts model the first type of bad magnetic fields that show up with increasing β are due to three symmetric cusp singularities, the 'rockets'.
Proposition 8. (b) The solutions of the system (44)-(45) can be explicitly parametrized in the form (46)-(47) for s < 0.
(c) The non-Gibbs temperature β_NG is the minimal value of β along this curve; it is characterized by the zero of the function defined in (50).
Proof. Let us first prove item (a). A symmetric cusp point α is the image of a symmetric degenerate stationary point m under the map χ(·, β, t) at which the tangent vector of the curve of degenerate stationary points (given by the vanishing of the Hessian determinant) is parallel to the direction of degeneracy. The partial derivatives of G_{α,β,t} with respect to x and z vanish at m because of symmetry, so it is sufficient for a stationary point m to have a vanishing partial derivative with respect to the y-coordinate. For the gradient we note the explicit expression in which we abbreviate Γ_{b,a} = Γ_{b,a}(βm, t) and use the fact that α lies on the simplex edge, α = (0, 1/2, 1/2); this yields Equation (44). We now derive Equation (45). Note that the mixed partial derivative which appears in the degeneracy condition (28) vanishes at partially symmetric points (52): plugging in α = (0, 1/2, 1/2), the right-hand side of the last equality in (52) vanishes because Γ_{3,3} − Γ_{3,2} = Γ_{2,2} − Γ_{2,3} for points m with the partial symmetry m_2 = m_3. Therefore the degeneracy condition (28) is in product form. We calculate the remaining partial derivatives (53) and (54). The partial derivative (53) is always positive for β < 8/3, which means we only have to consider the zeros of (54); this yields Equation (45).
We now explain the parametrization of the set of solutions given in item (b). First note that the variable β can be eliminated from Equation (45) using Equation (44) for all y ≠ 0. When we set w = e^{g_t} + 1, we find that the resulting equation is a quotient of quadratic polynomials in w. Since w > 2, it suffices to consider the numerator of the left-hand side. The discriminant of this quadratic polynomial is given by D = ((3y − 1)e^{3y} + 12y)^2 + 8(6y + e^{6y}).
It is positive for all real y. Therefore this polynomial has two real roots. Because w > 2, we choose the larger of the two solutions, where we have set s = 3y and used the definition of F(s) in Equation (48). Furthermore, F(s) > 4 for s ≠ 0, so that Equation (47) yields positive values for g_t. Finally, the non-Gibbs inverse temperature is the minimal value of β along the curve given by the parametrization (46)-(47). We therefore calculate the derivative of (46). Since 4e^s − F(s) is never zero for any s in (−∞, 0), we only have to consider the numerator of the resulting fraction. We calculate the derivative of F and, putting everything together, dβ/ds = 0 is fulfilled exactly at the zero of the function defined in (50).
Lemma 9. Suppose β lies in the interval (β_NG, 3). The entry time t_NG(β) into the non-Gibbs region is given by (60), where y is the largest root in (−β/6, 0) of
y → 2β^2 + 24βy + 72y^2 − (β^2 + 3βy − 18y^2 − 9β)e^{6y} − 4(β^2 + 3βy − 18y^2)e^{3y}.   (61)
Proof. The entry time t_NG is given by the first entry of the rockets into the unit simplex as the time t is increased at fixed β. This is because, if the pentagrams unfold at all under an increase of time, they unfold after the rockets have entered the unit simplex ∆_2; this will become clear in the next subsection, where we compute the butterfly line. So let us consider the system (44)-(45) and fix any positive β < 3. Since the relation (4) between g_t and t is strictly monotonically decreasing, we have to look for the maximal g_t such that (β, g_t, y) with negative y is a solution to the system (44)-(45), which defines the symmetric cusp exit line. Here, y is a magnetization-type variable. We can solve Equation (44) for w = e^{g_t} + 1 to obtain (62). Plugging this into the left-hand side of the degeneracy condition (45), we arrive at
(2e^{−6y}/(3β^2)) [2β^2 + 24βy + 72y^2 − (β^2 + 3βy − 18y^2 − 9β)e^{6y} − 4(β^2 + 3βy − 18y^2)e^{3y}] = 0.   (63)
This yields the expression in (61). Since the right-hand side of (62) is increasing in y, we have to pick the largest root of (61).
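Given the explicit one-dimensional function in (61) (reconstructed above from the computation leading to (63)), locating the relevant root is a routine numerical task; a minimal sketch follows. The remaining substitution into (62) and (60) to obtain t_NG(β) is not sketched here.

import numpy as np
from scipy.optimize import brentq

def phi(y, beta):
    # Left-hand side of (61), as reconstructed from (63).
    a = beta ** 2 + 3.0 * beta * y - 18.0 * y ** 2
    return (2.0 * beta ** 2 + 24.0 * beta * y + 72.0 * y ** 2
            - (a - 9.0 * beta) * np.exp(6.0 * y)
            - 4.0 * a * np.exp(3.0 * y))

def largest_root(beta, n_grid=4000):
    # Scan (-beta/6, 0) from the right for a sign change, then refine with Brent.
    ys = np.linspace(-beta / 6.0 + 1e-9, -1e-9, n_grid)
    vals = phi(ys, beta)
    for i in range(n_grid - 2, -1, -1):
        if vals[i] == 0.0:
            return float(ys[i])
        if vals[i] * vals[i + 1] < 0.0:
            return brentq(phi, ys[i], ys[i + 1], args=(beta,))
    return None   # no root on the grid (e.g. beta below beta_NG)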
The butterfly unfolding (BU) line and butterfly exit temperature
The unfolding of the pentagrams is a very important mechanism, since it changes the set of bad empirical measures from straight lines to Y-shaped, branching curves. This mechanism is already present in the static case; however, in contrast to the static case, we have to deal with the fact that in some parameter regions the pentagrams do not lie fully inside of the unit simplex. This leads us to the definition of a butterfly exit inverse temperature β_BE, for which at some point in time t > 0 there is a cusp point on an edge of the simplex that is about to unfold into a pentagram. By definition, β_BE lies between β_NG and 8/3. The value 8/3 is the first inverse temperature for which a beak-to-beak scenario inside of the unit simplex appears, as we will see in Section 4.7.
Proposition 10. Let v(m, β, t) = (ϕ_β)_2 ∘ χ(m, β, t) be the parallel coordinate of χ(m, β, t) and let β(s) and t(s) be given by (46)-(47). The butterfly exit β_BE is determined by the vanishing of the second derivative (65) of the parallel coordinate along the curve of degenerate stationary points, where γ_s is the implicit function y = γ_s(x) defined in a neighbourhood of (x, y) = (0, s/3) by the degeneracy condition (28). Note that the expression (65) is computed explicitly by a computer program because it is very complicated; nevertheless it is possible to plot the function (see Figure 6).
Proof. Let us first fix β between β_NG and 8/3 and a positive t. Consider a point α at the midpoint of one of the edges of ∆_2 such that (α, β, t) belongs to the bifurcation set. Furthermore, without loss of generality by symmetry, let us assume that α_2 = α_3. To this point corresponds a degenerate stationary point m that has the same symmetry m_2 = m_3. We can solve the degeneracy condition (28) in a neighbourhood of m in the form y = γ_{β,t}(x) such that γ_{β,t}(0) is the y-coordinate of m. In α-space, in a neighbourhood of α = χ(m, β, t), we can now write the bifurcation set as χ(ϕ_β^{-1}(x, γ_{β,t}(x), 0), β, t). We know that the second derivative of the parallel component v of α vanishes when we follow the curve γ_s through the bifurcation set, because v has a minimum before the pentagram unfolds and a maximum after the pentagram has unfolded. The curve γ of degenerate stationary points is obtained by solving equation (28) in the form y = γ(x) around (0, y*), where y* is the parallel component of m*. Let us now compute the second derivative of the v-component of the curve. The other mixed partial derivatives of v vanish since γ'(0) = 0 because of symmetry.
Furthermore, we compute γ̈(0) via implicit differentiation. Let us write f(x, y) for the left-hand side of (28), viewed as a function on the unit simplex in (x, y)-coordinates. By implicit differentiation we then find γ̇ = −∂_x f/∂_y f, and differentiating once more yields γ̈. Using the symbolic calculus tools (see page 42) we can obtain an expression for (65). Using a similar approach it is possible to compute the line in the dynamical phase diagram on which we find butterfly points, no matter where these points lie with respect to the unit simplex. The key idea is again based on the behaviour of the parallel component v. Then the butterfly transition time t_BU(β) is given by (70), where s*(β) < 0 is the largest zero of (73).

Proof. Using the same reasoning as in the proof of Proposition 10, we find that the point m maps under χ(·, β, t) to a point α that is about to unfold into a pentagram if (74) holds, where γ_{β,t} is obtained by solving the degeneracy condition (28) in the form y = γ_{β,t}(x) in a neighbourhood of the point m. This equation depends on m, β and t, that is, we have one equation and three variables (m is one-dimensional because m_2 = m_3). Additionally, since we know that the direction of degeneracy is the x-direction, we have a second equation. This equation can be solved for w = e^{g_t} + 1, which yields (70). Plugging this into (74), we are left to find the zeros of (73) for some fixed β in the interval (β_BE, 8/3).
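As an illustration of this implicit-differentiation step, the following sympy sketch computes γ̇ and γ̈ for a generic constraint f(x, y) = 0; the particular f used here is a hypothetical stand-in, not the left-hand side of (28).

```python
import sympy as sp

x, y = sp.symbols('x y')
g = sp.Function('g')  # y = g(x): the implicit branch gamma_{beta,t}

# Stand-in degeneracy condition f(x, y) = 0 (hypothetical example, not eq. (28)).
f = sp.exp(3*y) - 1 - 3*y - 6*x**2
F = f.subs(y, g(x))

d1, d2 = sp.Derivative(g(x), x), sp.Derivative(g(x), (x, 2))
gdot = sp.solve(sp.diff(F, x), d1)[0]                      # gamma' = -f_x / f_y
gddot = sp.solve(sp.diff(F, x, 2).subs(d1, gdot), d2)[0]   # gamma'' via a second differentiation

print(sp.simplify(gdot.subs(g(x), y)))
print(sp.simplify(gddot.subs(g(x), y)))
```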
Reentry into Gibbs: the asymmetric cusp exit (ACE) line
In the β-regime (β_NG, β_BE), three pentagrams unfold inside the simplex at an intermediate time and leave the simplex as t increases further. Since we are interested in phase coexistence of the first-layer model μ_n (Lemma 4), and the phase-coexistence lines of the pentagrams end in the asymmetric cusp points of the pentagrams, we must compute the exit time t_G(β) of these points for β in the above regime. As in the previous subsection, this is done using a combination of symbolic and numerical computation (see page 42). First, let us state the problem that we need to solve. (a) There is exactly one branch of solutions with m_2 = m_3, and it is given by the graph of a map x ↦ y = γ_{β,t}(x).
Then the asymmetric cusps of the pentagrams lie on the simplex edges if equations (77) and (78) hold.

Proof. The locations of the asymmetric cusps of the pentagrams on the curve x ↦ χ(ϕ_β^{-1}(x, γ_{β,t}(x), 0), β, t) are given by the local maxima of the parallel component v(x) as a function of the curve parameter x (see Figure 8). This yields (78). Equation (77) comes from the constraint that the cusp point lies on the simplex edge, because for points on the edge the parallel component equals −β/6 in the chart (26). Now, similarly to the case of the butterfly line, the computation of γ̇_{β,t}(x) by hand is impractical. Therefore we compute the expression symbolically with the help of the computer. This allows us to numerically determine the course of the line in the dynamical phase diagram. Because it is impossible to solve the degeneracy equation (28) in the form y = γ_{β,t}(x) explicitly, we proceed as follows. Note that it is possible to solve (77) for β and plug it into equation (78). We then fix some value of g_t and numerically solve the system consisting of the degeneracy condition (28), where β is substituted from (77), and equation (78), where γ_{β,t} is substituted by y and γ̇_{β,t} by −∂_x f/∂_y f, where f denotes the left-hand side of (28) considered as a function of (x, y). This yields two equations in the two variables x and y.
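A schematic of the symbolic-plus-numerical procedure described above. The expressions `f` and `eq78` below are hypothetical stand-ins for the left-hand side of (28) after eliminating β via (77) and for equation (78) after the substitutions; only the plumbing (symbolic implicit derivative, lambdify, two-dimensional root solve) reflects the procedure in the text.

```python
import sympy as sp
import numpy as np
from scipy.optimize import fsolve

x, y = sp.symbols('x y')

# Placeholders (hypothetical): in the actual computation f is the LHS of (28)
# with beta already eliminated via (77), and eq78 is equation (78) with
# gamma -> y and gamma' -> -f_x/f_y substituted.
f = y + x**2 - 1                       # stand-in degeneracy condition
gdot = -sp.diff(f, x) / sp.diff(f, y)  # implicit-derivative substitution
eq78 = gdot + 1                        # stand-in for (78)

F = sp.lambdify((x, y), [f, eq78], 'numpy')
sol = fsolve(lambda v: np.array(F(v[0], v[1]), dtype=float), x0=[0.2, 0.2])
print(sol)  # one (x, y) root of the two-equation system
```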
The triple point exit (TPE) line
To each of the three pentagrams there belongs a special point, the triple point [see 23, Sect. 3.2]. This point is characterized by the coexistence of three global minima, that is, the functional values of all three minimizers are equal. First, we discuss the existence of these points, and then we determine for each fixed positive β the exit time t_triple(β). This is the last time for which there are bad empirical measures with partial symmetry that lie inside the unit simplex.
For each fixed positive β and t in this regime, there exists exactly one α in the hyperplane H with α_1 ≤ α_2 ≤ α_3 such that G_{α,β,t} has precisely three global minimizers.
Since the pentagrams in the bifurcation slices leave the simplex (the observation window), it is necessary, for a discussion of the bad empirical measures, to find the time when the triple points leave the unit simplex. The problem that we have to solve is stated in the following proposition. The exit time t_TPE(β) is then given by t_TPE(β) = t(β, y′(β)), where ϕ_β(0, y′(β)) and ϕ_β(x(β), y(β)) lie in the fundamental cell m_1 ≤ m_2 ≤ m_3 and the triple (y′(β), x(β), y(β)) is a solution to the following system of equations.
Note that the expressions of the equations (84-86) are computed symbolically by the computer (see page 42 for more information). They are not displayed here because of their length. Figure 9 shows a contour plot of the HS transform G_{α,β,t} with α = (0, 1/2, 1/2) and (β, t) on the line TPE.
Proof. The system of equations mainly comes from two ingredients: equal depth of two minimizers, and the same end-conditioning α for these two minimizers. The triple point is characterized by a coexistence of three global minimizers, and since a triple point α must fulfil the symmetry relation α_2 = α_3, we find that it is sufficient to compare the two minimizers in the fundamental cell m_1 ≤ m_2 ≤ m_3.

Figure 10: The beak-to-beak mechanism is characterized by the merging of two horns of two different pentagrams. This merging joins two connected components of the complement of the bifurcation set slice when crossing the red line from right to left. As can be seen in the two rightmost plots, this merging happens on the axis of symmetry. The red dots in the dynamical phase diagram on the left mark the time-temperature pairs that correspond to the bifurcation set slices from left to right. The dots in the central plot correspond to the points of the same color in Figure 11.
Because α_2 = α_3, we always have one symmetric stationary point, so that the two minimizers have the coordinates (0, y′, 0) and (x, y, 0). Since we know that either minimizer is a stationary point, we can use the vanishing of the first partial derivative of G_{α,β,t} with respect to the y-coordinate to eliminate the time variable t from the equations. This yields the function in equation (83). Using this function we can eliminate the variable t from the equal-depth condition and from the other two equations, which require that the minimizers belong to the same end-conditioning α.
The beak-to-beak (B2B) line
The beak-to-beak point in the static model is characterized as a cusp point that lies in a segment from the center of the simplex to one vertex, that is, for example it has y > 0. The following proposition describes the line of beak-to-beak points and a parametric representation in terms of roots of a cubic polynomial. Note that, despite the fact that the line continues to exist for β > 3, the structural behavior of the bifurcation set around the beak-to-beak point might change in the regime β > 3.
Proposition 15.
Fix any positive β and t, and let m be a point in H with coordinates (0, y, 0).
Proof. From the analysis of the static model [see 23, Figure 2, rightmost plot of the first row and neighbouring plots for smaller or larger β] we know that the beak-to-beak point (α*, β*, t*) is such that, if we fix α = α* but change the parameters β or t, we find that α = α* is contained either in a cell with two minimizers or in a cell with one minimizer. Since α* lies on the axis of symmetry, we know α* = χ(m*, β*, t*) where m* lies on the axis of symmetry as well, and in coordinates we find ϕ_β(α*) = (0, v(m*, β*, t*), 0), so it suffices to study v(m, β, t) as a function of the y-coordinate of m. As before, substitute w = e^{g_t} + 1. In Figure 11 one sees a minimum and a maximum collide and form a saddle point. This is exactly the beak-to-beak behavior. The point (β, t) for which this collision has just happened is given by the vanishing of the first and second derivatives of v(m, β, t) with respect to the y-coordinate of m. Since w > 2, it suffices to consider the numerators of these derivatives. This yields equations (87) and (88). The case in which (88) cannot be solved for β is only fulfilled for y = 2/9; however, this leads to the contradiction e^{g_t} = 1/(e^{4/3} − 1) < 1 while g_t > 0. Therefore, we can assume that we can solve (88) for β. Plugging this into equation (87), we arrive at the following fraction of polynomials in w.
We will now discuss the roots larger than 2 of this cubic polynomial. It is convenient to change variables to θ = w − 2, so that we are interested in the positive roots of the resulting polynomial in θ. Using Descartes' rule of signs, we know that the number of positive roots is equal to the number of sign changes among consecutive nonzero coefficients of the polynomial, or less than it by an even number. Note that the coefficients in increasing order for s = 0 are given by (9, 12, 3, 0). Therefore we do not find any positive roots for very low positive values of s. The first sign change appears for the coefficient of order zero, which yields equation (92). All of the coefficients except the highest-order coefficient eventually become negative. However, with increasing s this happens with increasing order of the coefficient, so that we have only one sign change between consecutive coefficients for each s larger than s*. Thus, for all s > s* there exists only one root w*(s) larger than 2.
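A small sketch of the sign-change count behind this argument; the helper is generic, and only the s = 0 coefficient vector (9, 12, 3, 0), listed by increasing degree, is taken from the text.

```python
def descartes_sign_changes(coeffs):
    """Count sign changes among consecutive nonzero coefficients.
    By Descartes' rule, the number of positive roots equals this count
    or is smaller by an even number."""
    nonzero = [c for c in coeffs if c != 0]
    return sum(1 for a, b in zip(nonzero, nonzero[1:]) if a * b < 0)

# Coefficients of the polynomial in theta = w - 2, by increasing degree, at s = 0.
print(descartes_sign_changes([9, 12, 3, 0]))    # 0 -> no positive roots, as stated
# For s > s*, the lower-order coefficients have turned negative, e.g. a pattern
# like (-, -, -, +) gives exactly one sign change and hence one positive root.
print(descartes_sign_changes([-1, -1, -1, 1]))  # 1 (illustrative values)
```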
Reentry into Gibbs: the Maxwell triangle exit (MTE) line
For β in the interval (8/3, 4 log 2) the model displays recovery as well, but due to a different mechanism. After the horns of two pentagrams have touched, the Maxwell set, which consisted of three connected components, has become one connected component. It consists of three straight lines on the axes of symmetry and a triangle with curved edges. The model recovers from non-Gibbsianness when this triangle completely leaves the unit simplex, which happens on another line in the dynamical phase diagram that we call the Maxwell triangle exit (MTE) line. The MTE line admits a parametrization in s with s > 2 log 2, where w*(s) is the unique zero in (2, ∞) of

w ↦ s (1 + (e^s − 1)(we^s − e^s + w) / ((w + e^s)(we^s − e^s + 2))) + log((w + 1)^3 / ((w + e^s)^2 (we^s − e^s + 2))).  (107)

Proof. First, let us derive the system of equations (103-104). Since α has the full symmetry, that is, it is invariant under any permutation of S_3, it suffices to compare the depth of the central minimum m_0 with that of one of the three outer minima, denoted by m. In the following, we assume m_2 = m_3. The relative difference between the values is given by an expression in which Γ_{b,a} = Γ_{b,a}(βm, t). The partial derivative with respect to the x-coordinate of m vanishes because of symmetry. Plugging in the expressions for Γ_{1,1} and Γ_{2,1} yields (103). Now, let us come to the parametrization. Equation (105) follows by substituting w = e^{g_t} + 1 and s = 3y in equation (103) and solving for β, which is possible since s ≠ 0. Plugging this into equation (104) and making the same substitutions, we find (107). Note that w*(s) is increasing with s and that the solution of w*(s) = 2 is s = 2 log 2. For lower values of s, (107) has no zeros larger than two.
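A numerical check of this parametrization, assuming the reconstruction of (107) above; the bracketing interval and the example value of s are illustrative. With this reading of (107), w*(s) = 2 at s = 2 log 2, consistent with the statement above.

```python
import numpy as np
from scipy.optimize import brentq

def phi(w, s):
    """Reconstructed right-hand side of (107)."""
    es = np.exp(s)
    return (s * (1 + (es - 1)*(w*es - es + w) / ((w + es)*(w*es - es + 2)))
            + np.log((w + 1)**3 / ((w + es)**2 * (w*es - es + 2))))

def w_star(s, upper=50.0):
    """Unique zero of phi(., s) in (2, upper); assumes a sign change on that range."""
    return brentq(phi, 2.0 + 1e-9, upper, args=(s,))

print(phi(2.0, 2*np.log(2)))  # ~0: w*(s) = 2 at s = 2 log 2
print(w_star(2.0))            # example value for some s > 2 log 2
```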
The elliptic umbilics (EU) line
In the static model there is a special point called the elliptic umbilic. This catastrophe at the center of the unit simplex is responsible for the fact that the central minimum changes to a maximum. In the dynamical model, due to the additional parameter g_t, we have a whole line of these points. This line we call the line of elliptic umbilics (EU).
Using the Taylor expansion of the logarithm and the exponential function, (111) follows by an elementary computation. Note that (113) is actually the HS transform of the static Potts model.
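To illustrate how such order-three expansions can be generated by computer, here is a sympy sketch that computes the third-order Taylor polynomial of a potential around the simplex center and reads off the coefficients of x^2 y and y^3. The potential G used below is a hypothetical stand-in, so it will not reproduce the specific −1/3 relation mentioned in the next paragraph; only the mechanics of the expansion are illustrated.

```python
import sympy as sp

x, y, beta = sp.symbols('x y beta')

# Stand-in potential built from exp and log terms (hypothetical; the actual
# HS transform G_{alpha,beta,t} on the elliptic umbilic line is not reproduced here).
G = sp.log(sp.exp(beta*(x + y)) + sp.exp(-beta*x) + sp.exp(-beta*y)) \
    - beta*(x**2 + x*y + y**2)/2

def deriv(expr, i, j):
    """Mixed partial derivative of order (i, j) in (x, y)."""
    e = expr
    if i: e = sp.diff(e, x, i)
    if j: e = sp.diff(e, y, j)
    return e

# Order-three Taylor polynomial around the simplex center (x, y) = (0, 0).
taylor3 = sp.expand(sum(deriv(G, i, j).subs({x: 0, y: 0})
                        / (sp.factorial(i) * sp.factorial(j)) * x**i * y**j
                        for i in range(4) for j in range(4) if i + j <= 3))

poly = sp.Poly(taylor3, x, y)
print(sp.simplify(poly.coeff_monomial(x**2 * y)), sp.simplify(poly.coeff_monomial(y**3)))
```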
Using symbolic computation with the help of a computer, it is also possible to obtain a Taylor expansion for every pair (β, g_t) on the elliptic umbilic line. Because of symmetry, the β-dependent coefficients of x^2 y and y^3 differ only by a factor of −1/3. This means that for any (β, g_t) on the elliptic umbilic line the potential G_{α,β,t}, with α representing the uniform distribution, has the following Taylor expansion up to order three around the simplex center.

| 12,360.6 | 2020-10-31T00:00:00.000 | ["Mathematics", "Physics"] |
Targeting autophagy with small molecules for cancer therapy
Autophagy is a conserved lysosome-dependent catabolic process that maintains cellular homeostasis by recycling misfolded proteins and damaged organelles. It involves a series of ordered events (initiation, nucleation, elongation, lysosomal fusion and degradation) that are tightly regulated and controlled by diverse cell signals and stress. Autophagy acts as a double-edged sword that can play either a protective or a destructive role in cancer, depending on pro-survival or apoptotic cues. Recently, modulating autophagy with pharmacological agents has become an attractive strategy to treat cancer. Currently, a number of small molecules that inhibit autophagy initiation (e.g., ULK kinase inhibitors), nucleation (e.g., Vps34 inhibitors), elongation (e.g., ATG4 inhibitors) and lysosomal fusion (e.g., chloroquine, hydroxychloroquine, etc.) have been reported in preclinical and clinical studies. A number of small molecules have also been reported to induce autophagy by targeting the mammalian target of rapamycin (e.g., rapamycin analogs) or adenosine 5’-monophosphate-activated protein kinase (e.g., sulforaphane). These results suggest that many potential “druggable” targets exist in the autophagy pathway that could be harnessed for developing new cancer therapeutics. In this review, we discuss the reported autophagy modulators (inhibitors and inducers), their molecular modes of action and their applications in cancer therapy.
INTRODUCTION
Autophagy is a natural cellular process that maintains cellular homeostasis. During autophagy, catabolic degradation occurs to recycle unnecessary or dysfunctional cellular components, damaged organelles and protein aggregates [1]. It also removes intrusive pathogens, protecting the body from various infectious diseases and other disorders including cancer [2,3]. The autophagic process occurs when cells are under stress conditions such as starvation or hypoxia. Based on these cues, the cytoplasmic contents (cargo) are sequestered within the autophagosome, which then fuses with the lysosome for cargo degradation. Three forms of autophagy have been reported: (1) chaperone-mediated autophagy; (2) microautophagy; and (3) macroautophagy [4] [Table 1].

The types and modes of action differ from one another based on how and which target cargoes are subjected to lysosomal degradation. In chaperone-mediated autophagy, a substrate bearing the targeting motif KFERQ is recognized by the chaperone Hsc70, which delivers the individual protein substrate to the lysosome for degradation [5]. Microautophagy, in contrast, is responsible for basal degradation of cytoplasmic content by direct invagination into the lysosome [6]. Macroautophagy, a ubiquitous pathway in eukaryotic cells, starts with the formation of double-membrane structures called autophagosomes in which cargo sequestration occurs [3]. The early autophagosome is formed from components derived from the endoplasmic reticulum and acquires V-ATPase and LAMP to become the late autophagosome [7,8]. Finally, the late autophagosome fuses with pre-existing lysosomes to become the autolysosome [9]. The autolysosome contains unrecognizable cytoplasmic materials, as they are in the process of degradation and recycling [10]. The multi-step process of macroautophagy (hereafter referred to as autophagy; Figure 1) is regulated by autophagy-related genes, which were originally identified in autophagy-defective yeast mutants.

Autophagy is initiated through diverse signaling pathways in response to major stress and plays a pro-survival role through nutrient recycling. The stress factors include low cellular energy levels, amino acid deprivation, growth factor withdrawal, hypoxia, ER stress, oxidative stress, organelle damage and infection [11]. Over a period of stress, cells employ the autophagy process either to relocate contents or to degrade harmful components, such as damaged mitochondria or invading pathogens, through lysosomal degradation [12]. Aberration in the autophagy process has been implicated in a wide range of diseases, including neurodegenerative disorders that involve the accumulation of pathogenic proteins, inflammatory disorders and cancer [4,13]. The following sections discuss the signaling pathways involved in the autophagy process and their roles in cancer and other diseases; small-molecule inhibitors that target the autophagy process and are useful for cancer therapy are also detailed, along with their modes of interaction with their targets, where known.
Initiation step
Autophagy is a multi-step process involving initiation, nucleation, elongation/expansion and closure steps [14]. The initiation of phagophore formation is governed by a multi-protein complex known as the ULK complex (Unc-51-like kinase 1, FIP200, ATG101 and ATG13), which integrates upstream nutrient and energy status and thereby initiates the process of autophagy [Figure 2]. Each protein of the ULK complex has a unique role: ULK1, a serine/threonine protein kinase, plays a key role in scaffold formation of the ULK1-ATG13-FIP200 complex [15]; ATG13, an adaptor protein, mediates the interaction between ULK1 and FIP200 and directly binds to LC3 as well; and ATG101, another subunit of the ULK complex, recruits downstream Atg proteins that are essential for autophagy [16,17]. The initiation step of autophagy under normal nutrient conditions (sensed, e.g., by the levels of growth factors, amino acids and glucose) is regulated by the negative autophagy regulator mammalian target of rapamycin (mTOR), which phosphorylates two subunits of the ULK complex, ULK1 and ATG13. mTORC1, which contains Raptor and mLST8 as subunits, directly binds the ULK1 protein and thereby leads to ULK complex dissociation [18]. Upon nutrient starvation, mTORC1 dissociates from the ULK complex; ULK1 then becomes activated and phosphorylates ATG13, ATG101 and FIP200 to initiate phagophore formation [Figure 3]. Investigations suggest that phosphorylation of ULK1 plays a vital role in the regulation of autophagy initiation, and at least 30 phosphorylation sites have been reported. However, the molecular basis and the molecules involved in ULK1 phosphorylation are yet to be completely unraveled [18].

In contrast to mTORC1, adenosine 5'-monophosphate (AMP)-activated protein kinase (AMPK) indirectly activates the ULK complex by phosphorylating TSC2 and Raptor. Recently, AMPK has also been shown to interact directly with and phosphorylate ULK1 in a nutrient-dependent manner. Several phosphorylation sites, including Ser555, Ser637 and Ser757, have been reported. The AMPK phosphorylation site Ser555 is thought to recruit a phospho-binding protein to the ULK complex [19]. During glucose starvation, the AMPK-targeted phosphorylation sites on ULK1 are triggered and contribute to ULK1 activation [Figure 2]. AKT is a serine/threonine kinase that acts as a sensor of growth factor levels in the cell and is activated under nutrient-rich conditions. Upon activation, AKT phosphorylates Ser9 of GSK3, which acts as an inhibitory cue for GSK3. Dephosphorylation of Ser9 activates GSK3, and the activated GSK3 eventually phosphorylates TIP60 at Ser86 [20]. Further, TIP60 acetylates ULK1 and thereby increases the kinase activity of ULK1 [Figure 2]. Finally, accumulation of the activated ULK complex initiates phagophore formation [20].
Nucleation step
Phagophore formation starts with the nucleation step of the autophagy process, in which protein subunits including Vps34, Beclin1, Ambra1, ATG14 and p150 coordinate with each other to form a nucleation complex [21]. During nucleation, vacuolar protein sorting 34 (Vps34) and its enzymatic product phosphatidylinositol-3-phosphate (PI3P), an essential component, play a vital role in recruiting other autophagy protein subunits such as WIPI-1, DFCP1, ATG5 and LC3. The effectors WIPI-1, WIPI-2 and DFCP1 bind to PI3P via WD repeats and FYVE domains, respectively [21,22]. In the nucleation process, Vps34 associates with the phagophore membrane via p150 [Figure 3], which is anchored by myristic acid. Beclin-1 is the third important component for the phospholipid kinase activity, and its activity is affected by many different binding partners. Beclin1 dissociates from the anti-apoptotic factor Bcl2, which leads to the activation of Vps34. The association of Beclin1 with Vps34 is stabilized by two other components, the UV radiation resistance-associated gene product (UVRAG) and the beclin-1-associated autophagy-related key regulator [23].
Elongation step
In the elongation step, maturation of the autophagosome takes place with the help of two ubiquitin-like conjugation systems [Figure 4]. In the early step of autophagosome maturation, Atg12 is activated by Atg7 and transferred to Atg10, thereby forming a covalent linkage with Atg5 [24]. This Atg12-Atg5 conjugate complexes with Atg16L and forms the autophagy elongation complex. The carboxy-terminal Gly residue of Atg12 forms a thioester linkage with the active-site Cys residues of Atg7 and Atg10, and an amide linkage with a Lys residue of Atg5 [25]. The elongation complex forms a dimer, which provides a site for LC3 lipidation, a process required for the association of LC3 with the autophagosome membrane [26]. Although Atg12 does not show sequence similarity with ubiquitin, it adopts a ubiquitin-like fold and is involved in the autophagy elongation step. For autophagosome maturation, LC3 lipidation is essential and acts as the second ubiquitin-like conjugation. This conjugation occurs in a series of reactions, including proLC3 cleavage to LC3-I by Atg4B, LC3-I activation by Atg7, transfer to Atg3 and finally conjugation with PE. Like Atg12, the carboxy-terminal Gly of LC3 forms thioester linkages with Cys residues of Atg7 and Atg3, and an amide linkage with PE [27]. These reactions are similar for the LC3 homologues GABARAP, GATE-16 and mAtg8L. Completion of these maturation steps leads to autophagosome-lysosome fusion [28] and then degradation of the cargo.
Fusion and degradation step
Degradation and recycling of cellular components is a central function of all living cells to meet cellular demands. In the final stage of autophagy, the matured autophagosome fuses with multivesicular endosomes and lysosomes. Degradation of cytosolic components is not a random process, and thus the involvement of several proteins such as Vps34/SKD1 and Rab11 is necessary to accomplish the autophagosome-lysosome fusion process [29].

A recent study reported that components of the HOPS complex (homotypic fusion and protein sorting) play a major role in autophagosome-endosome fusion. Moreover, dysfunction or absence of subunits of the ESCRT III complex and of proteins required for the biogenesis of endosomes severely affects the fusion process [30]. The fusion of the inner membrane of the autophagosome delivers the cytosolic proteins to lysosomes, where hydrolysis takes place to complete the degradation of the cargo.
ROLE OF AUTOPHAGY IN VARIOUS DISEASES
Dysregulation of the autophagy process results in various diseases. Defects or deregulation are especially important in cancer, ageing-related diseases, neurodegenerative diseases and lysosomal storage diseases [2]. In the ageing process, the functional role of autophagy is the removal of aggregated proteins, which increases lifespan; when the autophagy process is defective, the formation of vacuoles and improper fusion of vacuoles with lysosomes result in impaired protein flux [31]. In infectious disease, the functional role of autophagy is to remove bacterial and viral pathogens through sequestration in autophagic vacuoles followed by degradation. This provides immunity against pathogens [32]; a defective autophagy process, in contrast, provides a conducive environment for pathogens. In lysosomal storage disorders, the removal of lysosomal stock such as fatty acids and cholesterol is defective, and an increased number of autophagosomes and reduced organelle turnover occur [33]. In neurodegenerative disorders, neuronal protein aggregates are removed by the autophagic process; when this process is defective, the protein aggregates accumulate in neurons, leading to neurodegeneration [34].
Role of autophagy in cancer
Autophagy is a complex process that responds to a variety of stressful environments, such as nutrient deprivation, abnormal protein accumulation and damaged organelles, and thereby maintains cellular homeostasis [35]. Autophagy plays a cytoprotective role by clearing damaged organelles, misfolded proteins and ROS, thus limiting the genomic instability and aberrant mutations that ultimately lead to cancer. Consequently, the autophagy machinery can act as a cell survival mechanism in normal cells and as a death mechanism in cancer cells.

However, deregulation of autophagy has been reported in a variety of diseases including cancer. Many reports have shown that autophagy plays a dual role in cancer development. In the early stage of cancer, autophagy suppresses/abolishes tumor formation by clearing damaged proteins and organelles and thereby induces cell death; whereas in advanced cancers, the stress-mediated properties of autophagy are hijacked by tumor cells to meet the increased metabolic requirements that are indispensable for tumor survival and rapid proliferation. Hence, autophagy has been reported as a tumor promoter in advanced cancers. Additionally, the regulation of autophagy through diverse signaling mechanisms can contribute to the upregulation or downregulation of tumor suppressors and oncogenes, which can lead to inhibition or induction of cancer development [36,37]. For example, negative regulation of tumor suppressor genes through different signaling mechanisms (i.e., mTOR, AMPK, etc.) could induce autophagy and suppress cancer initiation, whereas activation of oncogenes could lead to inhibition of autophagy and promotion of cancer development.

It has been widely reported that autophagy modulates cancer growth and development, depending on cancer type, stage and genetic context. A basal level of autophagy is considered a cancer-suppressive mechanism in normal cells. However, abnormal levels of autophagy under stressful conditions (i.e., hypoxia, ROS, etc.) lead to inhibition of the breakdown of damaged organelles and proteins, and subsequent cancer development. Nonetheless, it has been reported that mutations in autophagy-related proteins lead to tumor suppression or promotion in a variety of cancers. For example, BECN1-related proteins (e.g., BIF-1, etc.) have been found to be abnormal or absent in gastric and colorectal cancer [38,39]. Further, mutation in the UVRAG protein has been reported to reduce autophagy, with consequent colorectal cancer development [40]. On the other hand, an unexpectedly high basal level of autophagy has been reported in several types of RAS-activated cancers (e.g., pancreatic cancers), and inhibition of autophagy in these cancers hinders tumor formation [41].
To identify alterations of different genes involved in autophagy signaling pathways, we analyzed data from 1,087 cancer patient samples from the cBioPortal database (http://www.cbioportal.org/). Through this analysis, we noticed that the mTOR gene shows a high alteration frequency of 12% (altered/profiled ratio = 128/1,087) and that PIK3C3 shows 9% (altered/profiled ratio = 97/1,087) alterations. We also observed alterations of other genes, including ULK1 (5%), UVRAG (5%), beclin1 (2.7%), ATG4B (4%), ATG16L1 (2.1%), ATG5 (4%) and ATG12 (2.8%). Altogether, these data indicate that cell transformation and deregulation of many signaling pathways are connected directly or indirectly with autophagy modulation. This evidence suggests that autophagy has a dual role in cancer that depends on biological factors such as the driving oncogene, the tumor suppressor involved and the tumor type. Hence, autophagy is considered a double-edged sword, both protecting from and promoting cancer [42,43].
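A minimal sketch of this frequency calculation, using only the altered/profiled counts quoted above; the script is illustrative and is not part of the cBioPortal interface.

```python
# Alteration frequencies quoted above, as altered / profiled sample counts
# from the query of 1,087 patient samples.
alteration_counts = {
    "mTOR": 128, "PIK3C3": 97,
    # counts for the remaining genes are only given as rounded percentages in the text
}
PROFILED = 1087

for gene, altered in alteration_counts.items():
    pct = 100 * altered / PROFILED
    print(f"{gene}: {altered}/{PROFILED} = {pct:.1f}%")  # mTOR ~11.8% (~12%), PIK3C3 ~8.9% (~9%)
```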
Autophagy acts as a tumor suppressor during tumorigenesis
Autophagy is widely documented as a tumor-suppressive mechanism, as its deregulation leads to genomic instability, aberrant mutations, tumor formation and metastasis [44]. The role of beclin1 in autophagy has been studied most extensively. For example, mice with monoallelic deletion of the beclin1 gene show increased tumor formation. Consistent with this, allelic loss of beclin1 has been found in 40%-75% of breast, ovarian and prostate cancers [44,45]. It is well documented that Beclin-1 promotes autophagy by binding to Vps34 via its conserved domain, which was reported to be essential for tumor suppression. Recently, phosphorylation of multiple tyrosine residues of beclin1 has also been observed, which decreases the activity of the beclin1/PI3KC3 complex and thereby reduces autophagy [46].

A reduction of beclin1 protein levels has also been reported in many brain cancers. A study that investigated beclin1 mRNA expression in different histotypes of brain tumors reported that expression levels vary with the type of tumor. After examining mRNA expression in 212 primary brain tumors, the study identified low expression in most high-grade ependymal neoplasms, astrocytic tumors and atypical meningiomas, and high expression in low-grade tumors and medulloblastomas [47]. Additionally, monoallelic deletions/mutations in the UVRAG protein and decreased expression levels of Bif-1 have also been reported in colon, gastric, breast, prostate and bladder cancers [38]. These results clearly indicate that the autophagy-related protein Beclin-1 and its regulators (i.e., UVRAG and Bif-1) mediate tumor suppression.

Further, deregulation of several proteins of the PI3K/Akt pathway has also been reported to impair the autophagy mechanism and can lead to tumorigenesis. For example, phosphatase and tensin homolog (PTEN) was reported to inhibit the Akt survival pathway and thereby induce autophagy. A mutation in the PTEN gene, however, leads to constitutive activation of Akt and inhibition of autophagy, leading to cancer formation [44,48]. Furthermore, the accumulation of p62 aggregates has been reported to cause several cancers owing to an impaired autophagy mechanism. Another study, by Kang et al. [49], identified frameshift mutations within mononucleotide repeats in ATG genes in gastric and colorectal carcinomas and suggested that these mutations are associated with cancer progression through autophagy deregulation.

Another study investigated the expression of BNIP3 (Bcl-2/adenovirus E1B 19 kDa-interacting protein), a key regulator of mitochondrial autophagy, in breast cancer and reported that BNIP3 expression is significantly lost in invasive breast cancers, suggesting that breast cancer cells show high proliferation when BNIP3 expression is low. Collectively, this evidence suggests that autophagy plays an essential role in tumor suppression and, conversely, that its deregulation leads to cancer.
Autophagy acts as a cell survival mechanism in cancer cells
Genome-wide screening studies show that many genes are involved in the regulation of autophagy, acting either to suppress or to enhance it [50]. High-throughput analyses also contribute to the understanding of autophagy regulation at the protein level and in terms of protein-protein interactions. Further, research in yeast and animals suggests that stress-induced autophagy under nutrient-limiting conditions promotes cell survival by influencing the bioenergetics of the cell. A study conducted in the human prostate cancer cells PC3 and LNCaP and the breast cancer cells MCF7 showed that autophagy acts as a survival mechanism in hypoxic tumor cells. Hypoxia-inducible factor 1, a positive regulator of autophagy, enhances tumor metabolism and metastasis [51], thereby limiting the efficacy of radiation and chemotherapy. Hypoxia-inducible factor 1 is involved in the induction of BNIP3 and BNIP3L, which disrupt the beclin1-bcl2 complex and release beclin1 to induce autophagy. Further, BNIP3-induced autophagy acts as an adaptive survival mechanism in hypoxic tumors [52]. In vitro and in vivo studies have also reported that autophagy acts as a survival mechanism in squamous cell carcinoma by protecting against endoG-mediated apoptosis [3].

Additionally, it has been reported that RAS-activating mutations induce a high basal level of autophagy and consequently assist in the development of lung, colon and pancreatic cancers; hence, inhibition of autophagy in these cancers hinders tumor growth [44,48,53]. Genetic studies carried out in mice also disclosed that deletion of the autophagy gene FIP200 inhibits cell growth in mammary tumors [41]. Moreover, mutations in the BRAF protein have been reported to induce high levels of autophagy in CNS tumors [53], melanoma [54] and thyroid cancers [45], while inhibition of BRAF led to impaired autophagy and decreased cell proliferation and cancer growth. Altogether, these studies suggest that inhibition of autophagy could be an appropriate strategy for the treatment of cancer, and that targeting the autophagy pathway with small molecules would be fruitful.
AUTOPHAGY MODULATORS FOR CANCER THERAPY

mTOR inhibitors
mTOR, a member of the PI3K family, is critical as a primary regulator of cell growth, proliferation, metabolism and survival [55]. The catalytic subunit of both the mTORC1 and mTORC2 complexes is involved in many oncogenic signaling pathways. The hyperactivity of mTOR in many human cancers has made this protein kinase a therapeutic target, and inhibiting mTOR has therefore gained much attention in anti-cancer therapy. Rapamycin [Figure 5], with two binding moieties, is the first-generation inhibitor of mTOR. To form a ternary complex, one binding moiety of rapamycin binds FKBP12 and the other binds mTOR [56]. Rapamycin was initially recognized as an immunosuppressant that blocks T-cell activation, and its anti-cancer activity was documented later. Several rapalogs were generated by replacing C-40-O with different moieties, and among them temsirolimus was the first to receive FDA approval for cancer treatment [57]. Recent studies have shown that rapamycin can also act as a cytostatic agent, slowing or arresting the growth of various cancer cell lines.
Pan-PI3K inhibitors
PI3K is an essential component of the PI3K-AKT-mTOR pathway involved in cell proliferation and survival, and it is a well-known protein kinase with respect to the regulation of autophagy. In several cancers, including diffuse intrinsic pontine glioma, glioblastoma, paediatric high-grade glioma, breast cancer and cutaneous melanoma, overactivation of this pathway has been observed, and hence inhibition of PI3K has become an important objective in several cancers [58]. PI3K inhibitors including 3-methyladenine (3-MA) and wortmannin [Figure 6] are well characterized as autophagy inhibitors based on their inhibitory effect on autophagy induction. Besides inhibiting PI3K with an IC50 of 60 µmol/L, 3-MA has also been reported as an inhibitor of Vps34 with an IC50 of 25 µmol/L. Cell culture studies revealed that 3-MA suppresses cell migration and leads to cancer cell death under normal as well as starvation conditions [56]. Surprisingly, prolonged treatment (up to 9 h) with 3-MA has been shown to promote autophagic flux, increasing the levels of autophagic markers such as the LC3 protein [59].

Wortmannin, a fungal metabolite, is a selective, irreversible and potent inhibitor of PI3K that inhibits autophagic sequestration. It has been demonstrated that lower (nanomolar) concentrations of wortmannin potently and specifically inhibit PI3K, whereas higher concentrations can inhibit the ataxia telangiectasia gene-related DNA-dependent protein kinase. At physiological pH (6-8.5), wortmannin competes with ATP and ATP analogs for binding to PI3K, which suggests that wortmannin binds in the substrate-binding site of PI3K. More importantly, site-directed mutagenesis studies show that Lys802 is essential for forming the nucleophilic interaction [60]. These observations on the interactions of wortmannin with PI3K provide a molecular basis for designing better inhibitors of PI3K-family kinases to treat cancer via autophagy inhibition.
Ly294002, an inhibitor of PI3K family proteins, is derived from the flavonoid quercetin. Ly294002 is not completely selective for PI3K family proteins and additionally acts on other unrelated proteins and lipid kinases [61]. Ly294002 binds at the ATP-binding site and is more stable in solution than wortmannin. More importantly, this compound shows inhibitory effects with IC50 values of 0.5, 0.97 and 0.7 µmol/L for the PI3K α, β and δ targets, respectively [61,62]. To improve selectivity and specificity, many Ly294002 analogs were synthesized; SF1126, a prodrug of Ly294002, entered clinical trials, but its development was recently halted.

CLR457, an orally bioavailable inhibitor of all PI3K isoforms with antineoplastic activity, has been extensively characterized by in vitro biochemical methods and in vivo tumor xenograft studies [63]. Dose-limiting toxicity studies showed that CLR457 potently inhibited PI3K isoforms, including p110α (IC50 = 89 nmol/L), p110β (IC50 = 56 nmol/L), p110δ (IC50 = 39 nmol/L) and p110γ (IC50 = 230 nmol/L). However, poor tolerability and limited antitumor activity resulted in the termination of its clinical development.

Recently, omipalisib (GSK2126458) has been presented as an autophagy inhibitor that specifically binds PI3K in the PI3K/mTOR signaling pathway. It directly targets Akt phosphorylation by PI3K and reverses the phosphorylation of Akt by mTOR. It is an orally bioavailable, ATP-competitive dual inhibitor of PI3K and mTOR with high potency [64]. The indirect inhibition of Akt by omipalisib induces cytotoxicity and promotes autophagic cell death at a 0.5 μmol/L dose. Investigations carried out to determine whether the mechanism of cell death occurs through autophagy or apoptosis reported no significant difference between treated and untreated cells when apoptosis markers were used [65]. Cell culture studies reported that GSK2126458 arrests the cell cycle at the G1 phase and affects the proliferation of several cell lines, such as the breast cancer cell lines T47D and BT474, with IC50 values of 3 and 2.4 nmol/L, respectively. Currently, omipalisib is in clinical trials for idiopathic pulmonary fibrosis and solid tumors. The difficulties with existing PI3K inhibitors, such as the low solubility of wortmannin, the broad-spectrum inhibition of Ly294002 and the limited antitumor activity of CLR457, further necessitate the identification of new PI3K inhibitors.
Pan-mTORC inhibitors
Activation of mTORC1 and mTORC2 is important in many cancers. Compounds with an ATP-competitive mechanism that inhibit both mTORC1 and mTORC2 offer a better alternative to rapalogs. Torin 1 [Figure 7] is a commercially available, selective and highly potent ATP-competitive inhibitor of both mTORC1 and mTORC2. Torin 1 shows greater efficacy in blocking mTORC1- and mTORC2-mediated phosphorylation when compared with rapamycin, the well-known classical allosteric inhibitor of mTOR.

An in vitro kinase assay revealed that Torin 1 has IC50 values of 3 nmol/L, 3 µmol/L, 1.8 µmol/L and 1 µmol/L for mTOR, hVps34, PI3K-α and DNA-PK, respectively. These results show that Torin 1 is selective for mTOR inhibition over other kinases [60]. The second-generation ATP-competitive inhibitor Torin 2 is a potent and selective inhibitor of mTOR with a better pharmacokinetic profile, developed to overcome the limitations of Torin 1. In vitro studies revealed that Torin 2 reduced cell proliferation in several cancer cell lines and exhibited a combinatorial response with AZD6244, an inhibitor of MEK kinase, at a molar ratio of 1:50. Torin 2 inhibits several PIKK family proteins, including mTOR, ATM, ATR and DNA-PK, with IC50 values of less than 10 nmol/L [66], whereas Torin 1 inhibits only mTOR, ATR and DNA-PK. Further, the effects of Torin 2 on autophagosome formation were examined, and it was found to induce autophagy [66].

AZD8055 is another novel ATP-competitive inhibitor of mTOR kinase (IC50 of 0.8 nmol/L) that shows approximately 1000-fold selectivity against other kinases [67]. Remarkably, in xenograft studies AZD8055 showed substantial growth inhibition, suggesting that AZD8055 could be a potent therapeutic drug for many human cancers [67]. AZD8055-treated acute myeloid leukemia cells showed a significant decrease in cell cycle progression and cell proliferation in the blast phase. More interestingly, AZD8055 treatment resulted in decreased growth of leukemic progenitors but not of normal immature CD34+ cells [68]. Another independent study also revealed that AZD8055-treated chronic lymphoid leukemia (CLL) cells show a significant reduction in CLL cell proliferation and an increase in apoptosis. Currently, this inhibitor is in clinical development. In vivo studies further showed that administration of WYE-354, another ATP-competitive mTOR inhibitor, inhibited the growth of xenografts in severe combined immunodeficient mice [69]. PP30 is an adenine-mimetic compound with a pyrazolopyrimidine scaffold that selectively inhibits mTORC1 (IC50 = 8 nmol/L) and mTORC2 (IC50 = 80 nmol/L).
PI3K/mTOR inhibitors

mTOR shares high sequence homology with the hinge region of PI3K, as they belong to the same phosphatidylinositol 3-kinase family. Hence, several small molecules target both mTOR and PI3K simultaneously. PI-103 [Figure 8] belongs to the pyridofuropyrimidine class of compounds and is a multi-target inhibitor of PI3K and mTOR. Studies using the human leukemia cell lines MV4-11, OCI-AML3 and MOLM14 clearly indicated that PI-103 treatment arrested the cell cycle at the G1 phase and reduced cell proliferation. In AML patient samples, PI-103 produced an 82% reduction of AML progenitor clonogenicity. A significant increase in apoptosis was also observed in blast cells treated with 1.0 µmol/L PI-103. PI3K/Akt and mTOR inhibition has also been shown when AML blast cells were treated with RAD001 and IC87114 (RAD + IC), but the mechanism of the antiproliferative effect is yet to be elucidated. This study also reported that the inhibitory effect of PI-103 is not much greater than that of RAD + IC in AML blast cells [70].

A novel inhibitor, apitolisib (also known as RG7422/GDC-0980), is an orally available dual PI3K and mTOR inhibitor with excellent pharmaceutical and pharmacokinetic properties. Inhibition of overexpressed PI3K and mTOR by GDC-0980 has shown a significant reduction in tumor cell growth through the induction of apoptosis. These preclinical data show the high potency and selectivity of the inhibitory activity of GDC-0980. However, GDC-0980 was less effective than everolimus in metastatic renal cell carcinoma [72].

NVP-BGT226, a potent, orally bioavailable dual inhibitor of the PI3K and mTOR signaling pathways, blocks the cell cycle at the G0/G1 phase and induces autophagy and apoptosis. It has been shown that NVP-BGT226 suppresses the growth of primary myeloma cells and common myeloma cell lines at nanomolar concentrations. Specifically, NVP-BGT226 inhibits the PI3K α, β and γ isoforms with IC50 values of 4, 63 and 38 nmol/L, respectively. Analysis of the effects of NVP-BGT226 in hepatocellular carcinoma (HCC) shows inhibition of cell growth and proliferation with potent cytotoxic activity. Hence, the capability of NVP-BGT226 to target PI3K and mTOR may make it a candidate anticancer agent in HCC [73]. NVP-BEZ235, an imidazo[4,5-c]quinoline derivative, is a dual kinase inhibitor of PI3K and mTOR that binds to the ATP-binding site and halts the cell cycle at the G1 phase. When given orally to animal models, the compound displayed disease stasis in models of human cancers [74]. Although co-crystallization studies of this compound with its targets are ongoing, docking studies revealed that NVP-BEZ235 forms hydrogen bonds with ATP-binding cleft residues, including Val851, Asp933 and Ser774, of a PI3Kα homology model. This compound has entered phase 1 clinical trials for the treatment of breast cancer, advanced solid tumors and Cowden syndrome.
Unc-51-like kinase inhibitors
ULKs belong to the serine/threonine kinase family and play a crucial role in autophagy regulation [34]. Humans have four ULK kinases: ULK1, ULK2, ULK3 and ULK4. Among them, ULK1 is the best studied and is of utmost importance for autophagy initiation. Under nutrient deprivation, ULK1 is activated by several upstream signals (e.g., AMPK) and then initiates the autophagy process by recruiting various other proteins (i.e., FIP200, ATG101 and ATG13 for the ULK complex) to the site of autophagy initiation. Thus, ULK1 and its associated proteins (i.e., the ULK complex) play essential roles in the cell survival mechanism under nutrient deficiency [75]. Conversely, disruption of ULK1 and its associated protein complex leads to autophagy inhibition and cell death. As cancer cells generate energy and nutrients through the autophagy mechanism, which ultimately supports cell survival and tumor progression, disruption of ULK1 function with small-molecule inhibitors has become an attractive approach to treat cancer [75,76]. As a proof of concept, a few ULK1 inhibitors have been reported in the literature [Figure 9].

MRT67307 and MRT68921 are two closely related derivatives with different substitution patterns on the pyrimidine ring. MRT67307 inhibits ULK1 and ULK2 with IC50 values of 45 and 38 nmol/L, respectively. This compound led to the identification of MRT68921, which shows 15-fold improved inhibition of ULK1 (IC50 = 2.9 nmol/L) and 30-fold improved inhibition of ULK2 (IC50 = 1.1 nmol/L) compared with MRT67307. Studies using MRT67307 and MRT68921 in MEF cells show that these compounds are able to block autophagic flux. In addition, MRT68921-treated cells show a significant increase in SQSTM1 levels and a decrease in the LC3-II/LC3-I ratio. These results suggest that MRT68921 treatment efficiently inhibits ULK-mediated autophagy [77]. However, the molecular basis of this block remains to be elucidated. SBI-0206965 is a potent and specific inhibitor of ULK1 with an IC50 of 108 nmol/L; it also inhibits ULK2 (IC50 of 711 nmol/L), but with less efficacy than for ULK1. Inhibition of ULK1 in NSCLC cells results in an anti-proliferative effect [51]. Specifically, SBI-0206965 suppresses phosphorylation events in cells that are mediated by ULK1. At a concentration of 10 µmol/L, SBI-0206965 shows high selectivity, inhibiting only 10 out of 456 kinases.
In 2015, Lazarus et al. [75] reported the first crystal structures of ULK1 in complex with multiple inhibitors. The protein consists of an N-terminal kinase domain, a serine-proline-rich region and a C-terminal interacting domain. They used a standard 32P-ATP radioactive assay to screen a collection of 746 compounds against ULK1, which led to the identification of several pyrazole aminoquinazolines as ULK1 inhibitors. For example, the identified compound 2a showed dose-dependent inhibition with an IC50 of 160 nmol/L when re-tested in an in vitro assay. Further, co-crystallization of this compound with ULK1 demonstrated that compound 2a [Figure 10] binds in the ATP-binding site, making hinge-region interactions with its aminopyrazole core group. Moreover, the aniline moiety makes contacts with Asp165 of the DFG motif, and the cyclopropyl substituent fits into the pocket close to the gatekeeper methionine residue. The quinazoline ring interacts through positions 6 and 7 with the kinase backbone; in conformity with this steric constraint, no compounds with substituents at these positions showed activity against ULK1.

Further modifications to this series of compounds led to the identification of compound 2b, with improved potency (IC50 of 8 nmol/L) [56]. However, co-crystallization of this compound with ULK1 under conditions similar to those used for compound 2a produced different space groups, indicating that compound 2b induces conformational changes in the kinase domain, which led to the improved activity. Detailed analysis revealed major changes in the conformation of the interlobe loop, the side chain of Asp165 (a DFG-loop residue) and the gatekeeper methionine [75]. Testing of compound 2b against a small panel of kinases showed non-selectivity, suggesting the need to improve both selectivity and potency against ULK1.
Vacuolar protein sorting 34 inhibitors
Vps34 is a lipid kinase belonging to the class III PI3K subgroup. The major function of this family is to phosphorylate the 3-hydroxyl group of the inositol ring of phosphatidylinositol (PtdIns) lipid substrates to generate PtdIns3P [22,79,80]. Vps34 interacts with multiple protein subunits to form Vps34 complex I (i.e., Vps34, Vps15, Beclin 1 and Atg14L), which catalyzes the phosphorylation of PtdIns to PtdIns3P and anchors to intracellular membranes. The membrane-bound PtdIns3P then interacts with proteins containing FYVE, PX or WD40 domains and is involved in vesicle trafficking and the autophagy mechanism. Thus, the Vps34 complex is essential for the initiation/induction of autophagy during nutrient deprivation through regulation of the mTOR pathway. Therefore, disruption of the Vps34 complex leads to autophagy inhibition and cancer cell death. Hence, Vps34 has become an attractive drug target for cancer therapy; a number of inhibitors have been reported in the literature, and a few compounds have entered clinical studies.

Recently, Dowdle et al. [81] identified a hit containing a bisaminopyrimidine core as a potent and selective Vps34 kinase inhibitor using high-throughput screening of compound libraries. Optimization of this hit led to PIK-III [Figure 11], with improved potency and selectivity [81]. Additional biochemical testing and profiling of this compound showed that PIK-III is at least 100-fold selective for Vps34 compared with related kinases, including PI3Kα, mTOR and an additional 44 protein kinases. Moreover, they demonstrated that PIK-III inhibited the colocalization and distribution of a GFP-fused PtdIns3P-specific lipid-binding domain (FYVE) with an IC50 of 55 nmol/L, which is > 10,000 times more potent than the non-selective Vps34 inhibitor 3-MA.
Co-crystallization of PIK-III with the human Vps34 kinase [Figure 12A] revealed an overall structural geometry comparable to that of PI3Kγ and of the Drosophila melanogaster Vps34 structure [81]. Further analysis revealed that the structure appears like a typical lipid and protein kinase structure, with a relatively narrow active site containing a hydrophobic pocket appropriate for binding co-planar aromatic compounds. The binding mode of PIK-III in the Vps34 structure shows that the cyclopropyl group occupies the hydrophobic pocket formed by the side chains of Phe612, Pro618 and Phe684. Two hydrogen-bond interactions were observed between the PIK-III acceptor/donor and the backbone amide and carbonyl oxygen of Ile685. In addition, a solvent-mediated hydrogen-bond network bridges the aminopyrimidine moiety of PIK-III and the side chains of Asp671 and Asp644. Superimposition of the Vps34 and PI3Kα active sites revealed that in both structures the hydrophobic cavity is enclosed by P-loop residues; however, their relative orientations are quite different with respect to their hinge regions. In Vps34 the cavity is relatively displaced towards the hinge region, whereas in the PI3Kα structure it is wider and proximal to the hinge region. In the Vps34 structure, the relative orientation of Phe612 plays a significant role, as it allows the cyclopropyl group to fit into the hydrophobic pocket and acquire optimal interactions with the hinge region. In the PI3Kα structure, by contrast, the corresponding phenylalanine is replaced by a methionine residue, which does not allow the cyclopropyl group to fit into the pocket. This structural difference therefore provides an ideal handle for developing selective inhibitors of Vps34 and for explicitly measuring the pharmacological consequences of Vps34 inhibition in vivo.
Further docking and structure-activity relationship (SAR) studies led to the identification of compound 3a, with greater potency and improved metabolic stability, providing an excellent candidate for in vivo pharmacokinetic evaluation [80]. Compound 3a showed exceptionally selective activity over other lipid and protein kinases (> 100-fold against more than 280 kinases evaluated, except TAK1 and PI3Kδ). Co-crystallization of compound 3a with Vps34 [Figure 12] revealed that its binding mode is similar to that of PIK-III. A related co-crystal structure [Figure 10B] showed a similar orientation, with the diaminopropyl group occupying a similar space as the quinazoline of compound 2a; the main difference in the kinase is a movement of the β-sheet in the N-terminal lobe, with Gly23 moving towards the inhibitor, which allows Ile22 to twist away from the bulky diaminopropyl substituent on the pyrimidine, while the gatekeeper methionine moves towards the iodine group to adopt a suitable dipole-dipole interaction; a flexible region involving Ile22 is required to pack above the aminopropyl group [78]. Moreover, compound 3a prevented the degradation of various autophagy substrates (p62, NCOA4, NBR1, NDPS2 and FTH1), as does PIK-III. The pharmacokinetic profile of compound 3a revealed that it is rapidly absorbed and shows moderate mean systemic clearance (30 mL/min/kg) with good oral bioavailability (F% = 47). Oral administration of compound 3a to RKO colon cancer-bearing mice at 50 mg/kg twice a day for 7 days showed time-dependent accumulation of LC3-II with reduced autophagy capacity but without reduction of tumor volume. Hence, additional studies with long-term drug administration need to be carried out.
Pasquier et al. [82] reported a series of tetrahydropyrimidopyrimidinone derivatives [Figure 11] as Vps34 inhibitors, identified using a cell-based high-throughput phenotypic screening campaign. The reported compounds contain a pyrimidinone moiety with a morpholine group as the hinge binder and were shown to inhibit Vps34 in a kinase profiling assay panel. However, these compounds also showed cross-reactivity with class I PI3Ks (isoforms α, β, δ and γ) and, at a lower level, with mTOR. Further evaluation of the activity of these compounds against Vps34 using a GFP-FYVE cell-based assay revealed that compound 3b shows higher cellular potency than the other compounds [82]. Hence, compound 3b was selected as an advanced hit for back screening. Additional screening of tetrahydropyrimidopyrimidinone derivatives and SAR studies focused on potency and selectivity over lipid kinases. Substitution at the 1-position of the pyrimidinone scaffold led to the identification of compounds 3c, 3d and 3e [Figure 11], with enhanced Vps34 enzymatic potency, substantial GFP-FYVE cellular potency, and attractive ligand efficiency (LE) and ligand lipophilicity efficiency (LLE) values. Moreover, these compounds also showed favorable in vitro ADME properties and reasonable microsomal stability.

X-ray co-crystal structures of compounds 3c [Figure 12C], 3d and 3e with human Vps34 demonstrated that all the compounds adopt a DFG-in conformation and engage the hinge region via the oxygen atom of the morpholine ring. This moiety is also involved in favorable van der Waals interactions with adjacent residues. The aromatic ring of the pyrimidinone moiety is stacked between the Ile634 and Ile760 side chains, whereas the carbonyl function of this moiety makes a hydrogen-bond interaction with the catalytic Lys636 side chain and water-mediated hydrogen-bond interactions with the Asp644 and Tyr670 side chains. The (2S)-trifluoromethyl group of the tetrahydropyrimidine ring points towards the hydrophobic pocket under the P-loop residues Phe612 to Ala619. Compounds 3c-e differ only in their N-substituent, which points towards the exit of the ATP-binding site. Further modification of the hinge-binder moiety improved selectivity, and replacement of the trifluoromethyl moiety led to compound 3f. In vitro studies revealed that compound 3f displays IC50 values of 2 and 82 nmol/L in the Vps34 enzymatic assay and the GFP-FYVE cellular assay, respectively. Biophysical characterization using surface plasmon resonance and isothermal titration calorimetry demonstrated that compound 3f has binding affinities (KD) of 2.59 and 2.7 nmol/L, respectively. Moreover, compound 3f showed good physicochemical/drug-like properties (e.g., LE = 0.41 and LLE = 6.22). Further, co-crystallization of compound 3f with Vps34 [Figure 12D] revealed that the methyl group of the morpholine moiety points towards Met682. As with compound 3e, water-mediated hydrogen bonds were observed between the carbonyl oxygen atom and Asp761 of Vps34. This binding orientation provides enhanced Vps34 selectivity and potency. The in vivo pharmacokinetic (PK) profile of compound 3f disclosed good oral bioavailability (F% = 85), with maximal plasma concentration observed at 0.5 h and reasonable systemic clearance. Moreover, PK/PD experiments using GFP-FYVE H1299 tumors xenografted in SCID mice revealed that compound 3f produced sustained inhibition (> 80%) of granular staining and dose-dependent target modulation.
Lysosomotropic agents
Chloroquine [Figure 13], a widely used autophagy inhibitor acting at the last stage of autophagy, was initially developed to treat malaria and inflammatory diseases. Although the modes of action of bafilomycin A1 and of lysosomal protease inhibitors are well characterized, the mode of action of chloroquine remains unclear. However, it is believed that chloroquine inhibits autophagic flux by raising lysosomal pH and thereby inactivating lysosomal hydrolases [83]. Currently, chloroquine (CQ) and hydroxychloroquine (HCQ) are being investigated as autophagy modulators in Phase II/III trials for cancer therapy [84]. Recent in vitro and in vivo studies revealed that chloroquine affects the endo-lysosomal system and the Golgi complex, thereby modulating autophagic flux by decreasing autophagosome-lysosome fusion [85]. This study thus argued against inhibition of lysosomal degradation as the principal function of chloroquine. HCQ, a derivative of CQ, is a 4-aminoquinoline with antimalarial and anti-inflammatory activities; it is currently being investigated as an inhibitor of autophagy. Several clinical trials of HCQ in combination with other anti-cancer drugs (e.g., temozolomide, bortezomib, temsirolimus, vorinostat, doxorubicin, etc.) showed partial responses and stable disease outcomes for various cancers (melanoma, colorectal cancer, myeloma and renal cell carcinoma) [86]. As a basic compound, HCQ alkalinizes the acidic environment of lysosomes and thereby prevents autophagosome-lysosome fusion. HCQ has proven to be threefold less toxic than CQ and can augment the cytotoxicity of a number of chemotherapies and targeted therapies. A recent meta-analysis of clinical trials of CQ and HCQ concluded that their use in cancer patients is associated with better treatment responses when combined with existing anti-cancer therapy [87].
Based on the 4-aminoquinoline core structure of CQ and HCQ, Lys05 was designed; it is more potent in vitro and in vivo as a single agent. The increased activity of Lys05 is due to its bivalent aminoquinoline rings, the C7-chlorine and a short triamine linker. Lys05 trihydrochloride is water soluble and shows potent anti-tumor activity in several human cancer cell lines as a single agent. Intermittent high-dose or chronic daily dosing of Lys05 at lower doses has shown early blockage of autophagy in melanoma and colon cancer xenograft models [88]. A comparative study in cancer cells also revealed that HCQ at 100 µmol/L cannot achieve complete deacidification of the endovesicular compartment, whereas 50 µmol/L of Lys05 achieves complete deacidification. Further, in mouse models Lys05 at high dose (80 mg/kg, i.p.) causes Paneth cell dysfunction with loss of lysozyme biosynthesis and bowel pseudo-obstruction [88,89]. Studies have shown that high doses of Lys05 are associated with intestinal toxicity, and it has also been observed that high doses of HCQ cause low-grade nausea and constipation in patients. Lys05 is a new lysosomal inhibitor that has the potential to be further developed as a drug for cancer treatment.
Autophagy inducers
In addition to the above agents/compounds that were designed to modulate the autophagy process by specifically interacting with targets in autophagy pathways, there are other agents that induce autophagy. In particular, naturally occurring compounds have multiple modes of action and target different pathways. Some of them induce autophagy by targeting autophagy pathway signaling molecules and are discussed in this section [Figure 14].
Curcumin
Curcumin, a natural compound of the golden spice turmeric, shows numerous activities, including anti-inflammatory, antimicrobial, antioxidant, hypoglycemic and wound-healing activities. Considering these activities, curcumin has been investigated in many clinical conditions such as multiple myeloma, breast cancer and non-small cell lung cancer. Although it has proven efficacy in several clinical settings, curcumin has therapeutic limitations due to rapid systemic elimination, rapid metabolism and poor absorption. Curcumin has shown anticancer effects by sensitizing tumors to chemotherapy and radiation therapy [90]. In the gastric cancer cell lines SGC-7901 and BGC-823, curcumin significantly inhibited cell proliferation by inducing autophagy. Studies have reported that curcumin induces autophagy through its dual functionality of up-regulating p53 and p21 and down-regulating the PI3K/Akt/mTOR signaling pathway [91]. Another recent investigation also reported that curcumin induces autophagy in human pancreatic cancer cell lines [91]. Curcumin treatment of the human lung adenocarcinoma cell line A549 showed autophagy induction by means of increased AMPK phosphorylation [92].
Quercetin
Quercetin, a well-known cancer therapeutic agent and autophagy modulator, is abundantly present in vegetables and fruits and suppresses tumor proliferation [93]. It has been shown that quercetin suppresses the Akt/mTOR and Bcl-2 signaling pathways and activates LC3, ERK and caspases. Studies in human neuroglioma U87 cells and in vivo revealed that quercetin nanoparticles significantly inhibit neuroglioma cell proliferation by inducing autophagy and apoptosis. A cell proliferation assay of neuroglioma cells showed that quercetin nanoparticle treatment at 10-50 µg/mL gradually reduced cell viability [94]. A recent study reports that quercetin modulates the p-STAT3/Bcl-2 pathway and thereby induces protective autophagy in ovarian cancer [95].
Obatoclax mesylate
Obatoclax, a small-molecule indole bipyrrole compound, is an inhibitor of Bcl-2 family proteins and has exhibited anticancer activity in acute lymphoblastic leukemia [96]. In addition to its involvement in apoptosis, obatoclax has also been implicated in triggering cell death via autophagy induction [97]. Primarily, this compound plays a key role in stimulating necrosome assembly on the autophagosome membrane to induce autophagy. Co-immunoprecipitation studies indicated that obatoclax enhances the interaction between Atg5 and necrosome components including FADD, RIP1 and RIP3. The same study investigated the role of obatoclax at a concentration of 200 nmol/L in rhabdomyosarcoma cells to unravel whether cell death occurs via apoptosis or autophagy, and noticed little DNA fragmentation even when most of the cells had lost their viability.
Spermidine
Spermidine is a polyamine present in most mammalian cells and has important roles in several cellular activities under physiological conditions. It is well known for its longevity-promoting activity in association with autophagy enhancement. A study conducted in differentiated rat pheochromocytoma PC12 cells and human embryonic kidney 293 cells revealed the possible role of spermidine in autophagy. After treating the cells with spermidine, cell death analyses and caspase activity assays indicated that spermidine restricts neuronal injury by inhibiting caspase-3-dependent Beclin1 cleavage. Eventually, this mechanism elevates the dysregulated autophagic flux [98]. Very recent immunoprecipitation and immunohistochemistry studies in human chondrocytes, together with an in vivo study conducted in mice, reported that cells treated with spermidine (100 nmol/L) showed autophagy activation and promoted chondrogenesis [98]. However, the role of spermidine in cancer is still controversial, as some studies reported that spermidine does not induce tumorigenesis [99], whereas others established that spermidine synthase inhibitors show little cell growth reduction in cancer cell lines [100].
Sulforaphane
Sulforaphane, an organic isothiocyanate compound derived from glucosinolates, shows both cytoprotective and cytotoxic activities. It activates nuclear factor E2-related factor 2 signaling, which elevates the expression of antioxidant response proteins under oxidative stress. A study in murine osteosarcoma cells reported that sulforaphane induces apoptosis through cell cycle arrest and inhibits tumor cell growth [101]. It has also been reported that sulforaphane treatment initiates various cellular processes in human prostate cancer [102]. To investigate whether sulforaphane reduces cell growth or encourages cell death, a study was conducted in a human lens cell line and reported that sulforaphane reduced cell viability and promoted cell death. The study also revealed that sulforaphane promotes ER stress and autophagy induction via MAPK signaling [103], and longer treatment with sulforaphane has shown a significant decrease in AMPK phosphorylation at Thr-172 [104] in human prostate cancer cells.
FDA approved drugs with autophagy modulation activity
In addition to the autophagy modulators discussed above, a few FDA approved drugs (for different indications) are reported to have additional autophagy-modulating activities [Table 2]. This section is intended to list these drugs and provide brief information on their modes of autophagy modulation. Some of these drugs are already used as anticancer agents, while others are used for other indications. These data suggest that some of the approved anticancer drugs may be partly working through autophagy modulation. Moreover, understanding the exact molecular mode of action of these drugs could help to repurpose them for cancer therapy in the future.
Temozolomide, an alkylating agent, is an FDA approved drug for the treatment of glioblastoma multiforme in combination with radiotherapy [105]. Recent studies have investigated the role of temozolomide in autophagy modulation and reported that it induces autophagy in glioblastoma cancer cells in an EGFR-independent manner [106]. Temozolomide also showed cytotoxicity in adenoma cells in which the PI3K/AKT/mTOR pathway was inhibited [107]. Gefitinib, which targets the tyrosine kinase activity of EGFR and binds competitively to the ATP binding site, is an FDA approved drug for treating patients with non-small cell lung cancer (NSCLC) [108]. Gefitinib induces autophagy in lung cancer cells through blocking of the PI3K/AKT/mTOR pathway [109] and also exhibited autophagy induction in combination with clarithromycin in NSCLC cells [110].
Metformin, a biguanide antihyperglycemic agent, is FDA approved for treating non-insulin-dependent diabetes mellitus. It controls glucose levels by decreasing hepatic glucose production and by increasing insulin-mediated glucose uptake. It has been reported that metformin induces autophagy in an AMPK-dependent manner. Several cancer models have shown significant growth inhibition upon metformin treatment [111]. A study also reported that metformin promotes autophagy and selectively inhibits esophageal squamous cell carcinoma cell growth by down-regulating STAT3 signaling [112]. Studies on human multiple myeloma cells show that metformin inhibits cell proliferation by promoting autophagy and cell cycle arrest; this study suggested that metformin dually represses mTORC1 and mTORC2 via AMPK activation [113].
Bortezomib is a proteasome inhibitor that specifically inhibits nuclear factor kappa B and is an FDA approved drug to treat multiple myeloma [114]. Bortezomib has shown anticancer activity in several human cancers, including prostate cancer, colon cancer, ovarian cancer and breast cancer. Studies have explored the possible role of bortezomib in autophagy and found that it promotes cancer cell death through blockage of autophagic flux in an ERK phosphorylation-dependent manner [115]. Sodium phenylbutyrate is a chemical chaperone that inhibits histone deacetylases and is an FDA approved drug to treat urea cycle disorders. Sodium phenylbutyrate is also under clinical investigation in several human diseases, including hemoglobinopathies, motor neuron diseases, cancer and cystic fibrosis [115]. Sodium phenylbutyrate has been shown to reduce dithiothreitol- or tunicamycin-induced autophagy [116].
Carbamazepine is an FDA approved anticonvulsant drug for the treatment of epilepsy. Evidence indicates that carbamazepine diminishes hepatocellular death in an autophagy-dependent manner. A study on SW480 colon cancer cell lines revealed that carbamazepine decreases β-catenin and VEGF levels, resulting in antitumor activity [117]. Verapamil and rilmenidine are well-known drugs for the treatment of hypertension, and these drugs are now under investigation to unravel their role as autophagy modulators in cancer. Autophagy induction with cytoprotective activity has been observed upon treatment with verapamil in COLO 205 cells [118]. It has been observed that rilmenidine promotes autophagy in an mTOR-independent manner; however, a reduction in disease progression has not been observed [119].
Pantoprazole is a proton pump inhibitor used to treat certain esophagus- and stomach-related problems.
Various studies have reported that pantoprazole sensitizes cancer cells to anticancer drugs by suppressing autophagy induction in a time- and dose-dependent manner [120][121][122]. Celecoxib is a nonsteroidal anti-inflammatory drug that is FDA approved for the treatment of rheumatoid arthritis and osteoarthritis. Recent studies have evaluated celecoxib's activities in cancer and reported its antitumor effects in solid tumors. Autophagy was also significantly reduced upon treatment with celecoxib in imatinib-resistant chronic myeloid leukemia cells [123]. Altogether, although these drugs have proven to be useful in various human diseases, many studies are currently investigating their impact in cancer through autophagy modulation.
CONCLUSION
Autophagy is a conserved cellular process that is essential for cells to cope with adverse conditions, such as a lack of nutrients or bacterial/viral infections. In such conditions, autophagy plays a pro-survival role by recycling cellular components through lysosomal degradation and also helps in the removal of pathogenic organisms by sequestration and degradation. Because of its central role in maintaining cellular homeostasis, alterations in the autophagic process through aberrant signaling have been linked to several disease states, such as cancer and neurodegeneration, among others. In cancer, autophagy plays the role of a double-edged sword, with tumor-suppressing activity during the initial stages of cancer development and pro-survival effects in the later stages.
The relevance of autophagy modulation in cancer is increasingly appreciated, and many therapeutic targets involved in the autophagy process have been identified. In this review, we highlighted promising autophagy signaling pathways/biomarkers that could be modulated for cancer therapy. Further, we also reviewed and analyzed the available autophagy inducers/inhibitors. Except for the mTOR-targeting rapalogs, no other autophagy-targeting agents are approved in the clinic for cancer treatment. Currently, chloroquine and hydroxychloroquine are being clinically tested as autophagy modulators either alone or in combination with other anti-cancer therapeutics (e.g., temozolomide, bortezomib, temsirolimus, vorinostat, doxorubicin, etc.). Even though hydroxychloroquine shows partial responses and improves treatment outcomes in various cancers such as melanoma, colorectal cancer, myeloma and renal cell carcinoma, dose-related toxicity may preclude its widespread use as an autophagy modulator. However, this may change soon, as several groups are developing molecules against various drug targets within the autophagy signaling pathway, such as ULK and Vps34. Developing selective autophagy modulators with minimal cross-talk with other targets will be challenging and crucial for making autophagy modulation a successful strategy for cancer therapy. Moreover, patient selection based on alterations in autophagy signaling pathways could pave the way for better treatment outcomes with autophagy modulators.
The importance of autophagy-related research can be fathomed from the ever-increasing number of publications (90 articles in the year 2000, 2050 in 2010 and 6700 articles in 2018; Scopus data) and the fact that the 2016 Nobel Prize in Physiology or Medicine was awarded to Prof. Yoshinori Ohsumi for his research on autophagy in yeast. Even though we have made big strides in understanding the autophagy process, the complete molecular machinery of autophagy, the signaling processes involved and their roles in various disease conditions are not yet completely elucidated in humans. Future research should hold promise as well as provide insights in these directions for identifying better treatments for cancer and other diseases.
Figure 1. Cartoon representation of pathways involved in autophagy and the chemical inhibitors/inducers of the process. Autophagy involves 4 steps: (1) initiation; (2) nucleation; (3) expansion and closure; and (4) degradation and recycling. Small-molecule modulators that affect different steps of autophagy are shown in red (inhibitors) and green (inducers).
Figure 2. Diagrammatic representation of various signals leading to the formation of the autophagy initiation (ULK) complex.
Figure 4. Two ubiquitin-like conjugation systems are involved in the autophagophore elongation/maturation process.
Figure 10. A: Superimposition of the binding modes of compound 2a (PDB ID: 4WNO) and compound 2b (PDB ID: 4WNP) in ULK1; B: binding mode of compound 2c (PDB ID: 5CI7) in ULK1. Hydrogen bond interactions are represented as black dotted lines. Compound 2c, with a diaminopropyl group, a closely related pyrimidine analog of MRT67307, showed dose-dependent inhibition with an IC50 of 120 nmol/L. Co-crystallization of compound 2c with ULK1 [Figure 10B] showed that it has a similar orientation, with the diaminopropyl group occupying a similar space as the quinazoline of compound 2a. The main difference in the kinase is the movement of the β sheet in the N-terminal lobe, with Gly23 moving towards the inhibitor, which allows Ile22 to twist away from the bulky diaminopropyl substituent on the pyrimidine. The other difference is that the gatekeeper methionine moves towards the iodine group to adopt a suitable dipole-dipole interaction. A flexible region involving Ile22 was required to pack above the aminopropyl group [78] | 12,532.8 | 2019-04-19T00:00:00.000 | [
"Biology",
"Chemistry"
] |
Electrically charged black hole on AdS$_3$: scale invariance and the Smarr formula
The Einstein-Maxwell theory with negative cosmological constant in three spacetime dimensions is considered. It is shown that the Smarr relation for the electrically charged BTZ black hole emerges from two different approaches based on the scaling symmetry of the asymptotic behaviour of the fields at infinity. In the first approach, we prove that the conservation law associated with the scale invariance of the action for a class of stationary and circularly symmetric configurations allows one to obtain the Smarr formula as long as a special set of holographic boundary conditions is satisfied. This particular set is singled out by making the integrability conditions for the energy compatible with the scale invariance of the reduced action. In the second approach, it is explicitly shown that the Smarr formula is recovered through the Euler theorem for homogeneous functions, provided the same set of holographic boundary conditions is fulfilled.
conditions, which makes its thermodynamical description a very subtle problem. For instance, for the simplest choice of boundary conditions the energy spectrum turns out to be unbounded from below [11]. Indeed, by considering the same set of boundary conditions, a formula for the energy holds which is certainly not a Smarr relation. As mentioned in [12], this is because of the logarithmic contributions of the gauge potential, which spoil the homogeneity property of the extensive quantities, in the sense that the entropy is no longer a homogeneous function of definite degree in the conserved charges. In what follows, we show that it is possible to recover the aforementioned homogeneity property by considering the asymptotic conditions of the Einstein-Maxwell theory on AdS 3 introduced in [8], [9], endowed with an appropriate set of boundary conditions. The aim of this work is to show how the Smarr formula for the charged BTZ black hole emerges through two different approaches. Both of them are based on the preservation of the fall-off of the fields at infinity, given in [8], under a specific set of scale transformations that leaves the reduced action principle invariant for a wide class of configurations. In particular, we will use the method developed in [13], which recovers the Smarr formula for three-dimensional hairy black holes from a radial conservation law related to a scale invariance of the reduced action. However, the assumptions of this method are that the matter fields must be finite at the event horizon and vanish at infinity, where the latter is clearly not satisfied by the charged BTZ black hole because of the presence of the logarithmic terms. In spite of that, we show herein that this method can still be applied in the case of the Einstein-Maxwell theory on AdS 3 by implementing the asymptotic conditions proposed in [8], [9]. As a consequence, it can be proved that the Smarr formula for the charged BTZ black hole holds as long as a special set of holographic boundary conditions is satisfied.
The next section is dedicated to a brief review of the main results found in [8], related to the global charges and their integrability conditions for stationary and circularly symmetric configurations in the Einstein-Maxwell theory on AdS 3 . In section 3, we prove that the conservation law associated with the scale invariance of the action for the aforementioned class of configurations allows one to obtain the Smarr formula as long as a special set of holographic boundary conditions is satisfied. This particular set is singled out by requiring compatibility of the integrability conditions for the energy with the scale invariance of the reduced action principle. In section 4, it is shown that the same set of holographic boundary conditions ensures the right homogeneous transformation laws for the extensive quantities of the black hole under scaling transformations, allowing one to recover the Smarr formula through the Euler theorem along the lines of its original derivation for the Kerr-Newman black hole in [1]. We conclude with some ending remarks in section 5.
A review on the Einstein-Maxwell theory on AdS 3 and global charges
This section is devoted to a brief review of the results found in [8]. It presents the reduced action principle of the Einstein-Maxwell theory on AdS 3 in canonical form for stationary and circularly symmetric configurations, the variation of the global charges, and their appropriate integrability conditions.
Action principle for stationary and circularly symmetric configurations
The action of the Einstein-Maxwell theory with negative cosmological constant in three spacetime dimensions is given by (2.1). Here the Newton constant G and the AdS radius l are defined through κ = 8πG and Λ = −l^{-2}, respectively. We consider stationary and circularly symmetric spacetimes, which describe a wide class of configurations already reported in the literature [2], [10], [11], [14], [15], [16], [17], [18], [19], [20], [21], [22], [23], [24], [25], [26], [27], [28], [29]. The line element for this family of solutions is given by (2.2), where the gauge field is chosen as in (2.3). The reduced action principle in canonical form, (2.4), is obtained by replacing the class of configurations described by (2.2) and (2.3) in (2.1), where the boundary term B must be added in order to have a well-defined variational principle. The surface deformation generators H, H_φ and the generator of gauge transformations G enter (2.4) accompanied by their corresponding Lagrange multipliers N, N^φ and A_t. The only nonvanishing components of the momenta π^{ij} and p^i are explicitly given in terms of the metric and gauge-field functions; note that a prime denotes a derivative with respect to r.
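The displayed equations (2.1)-(2.3) are not reproduced above. For orientation, the following is a minimal sketch of the forms standardly used in this setting; the precise conventions of the original work (signs, normalization of the Maxwell term, exact parametrization of the ansatz) are assumptions here, not quoted from it.

```latex
% Assumed standard Einstein-Maxwell action in three dimensions, schematically (2.1):
I[g_{\mu\nu},A_{\mu}] \;=\; \int d^{3}x\,\sqrt{-g}\left[\frac{R-2\Lambda}{2\kappa}
  \;-\; \frac{1}{4}\,F_{\mu\nu}F^{\mu\nu}\right],
\qquad \kappa = 8\pi G,\quad \Lambda = -\,l^{-2}.

% Assumed stationary, circularly symmetric ansatz of the type entering (2.2)-(2.3):
ds^{2} \;=\; -\,N(r)^{2}F(r)^{2}\,dt^{2} \;+\; \frac{dr^{2}}{F(r)^{2}}
  \;+\; R(r)^{2}\bigl(d\varphi + N^{\varphi}(r)\,dt\bigr)^{2},
\qquad
A \;=\; A_{t}(r)\,dt + A_{\varphi}(r)\,d\varphi .
```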
Global charges and integrability conditions
Hereafter, it is considered that the class of configurations we are dealing with is given by asymptotically AdS 3 spacetimes with the behaviour at infinity (2.9), which was proposed in [8].
The coefficients appearing in this fall-off are assumed to vary in the phase space, while N_∞, N^φ_∞ and Φ correspond to arbitrary constants without variation, which are kept fixed at the boundary.
Following the canonical approach given in [31], as shown in [8], the variation of the energy for the class of configurations (2.2), (2.3), endowed with the fall-off of the fields (2.9), is given by (2.10), where ϕ_µ = (l^{-1}ϕ_t, ϕ_φ) and q_µ = (l^{-1}q_t, q_φ) are assumed to be Lorentz covariant vectors, whose indices are raised and lowered by the flat (conformal) boundary metric η_{µν} = diag(−l^{-2}, 1). The rest of the global charges, the angular momentum J and the electric charge Q_e, can be directly integrated, and they read as in (2.11) and (2.12). As explained in [8], (2.10) yields a nontrivial integrability condition for ϕ_µ and q_µ. The integrability of the energy is ensured by the condition δ²M = 0, which is satisfied provided (2.13) holds, i.e., ϕ_µ = −δV/δq_µ, where V is an arbitrary function of q_t and q_φ. In consequence, the energy and the angular momentum are then given by (2.14) and (2.15). In sum, the global charges are determined by the function V that describes the set of boundary conditions compatible with the integrability of the energy.
Scale invariance and radial conservation law
In this section, we will make use of the approach given in [13], where the Smarr formula for three-dimensional hairy black holes is recovered from a radial conservation law associated with a scale invariance of the reduced action. The assumptions considered in this approach are that the matter fields must be finite at the event horizon and vanish at infinity; the latter is clearly not satisfied by the charged BTZ black hole due to the presence of the logarithmic terms. Nonetheless, we will show that this method can still be applied in the case of the Einstein-Maxwell theory on AdS 3 by implementing the asymptotic conditions proposed in [8], [9].
In this case, it is possible to prove that the reduced action principle given in (2.4) is invariant under the set of transformations (3.1), which includes, in particular, p̄^r(r̄) = λ p^r(r) and π̄^{rφ}(r̄) = π^{rφ}(r), spanned by the scalings r̄ = λr, t̄ = t and φ̄ = φ, where λ is a positive constant. Note that similar scaling symmetries were first observed in the matter-free case [32] and in the context of three-dimensional hairy black holes in [13].
A direct application of the Noether theorem, considering the infinitesimal transformation laws derived from (3.1) acting on the reduced action principle (2.4), yields the quantity (3.2), which is conserved along the radial direction, i.e. C′(r) = 0 by virtue of the field equations. We will explore whether it is possible to find a Smarr formula for the charged rotating black hole [10], [11] from the conserved quantity (3.2). Thus, in the particular case of the black hole solution with event horizon located at r_+, C(∞) = C(r_+).
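Since the explicit expression (3.2) is not displayed above, the following schematic one-dimensional version of the Noether argument may help fix ideas; the charge of the actual reduced theory also involves the constraints and Lagrange multipliers, so this is an illustration rather than the paper's formula.

```latex
% Schematic: for a reduced action I[\psi]=\int dr\,L(\psi_i,\psi_i') invariant under
% \bar r = \lambda r,\ \bar\psi_i(\bar r) = \lambda^{\Delta_i}\psi_i(r)
% (infinitesimally \delta\psi_i = \epsilon\,(\Delta_i\psi_i - r\,\psi_i')),
% Noether's theorem yields a radially conserved first integral
C(r) \;=\; \sum_i\bigl(\Delta_i\,\psi_i - r\,\psi_i'\bigr)\frac{\partial L}{\partial \psi_i'} \;+\; r\,L,
\qquad \frac{dC}{dr} = 0 \quad \text{on-shell}.
```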
In what follows, we proceed to compute C(r) at infinity by considering the asymptotic behaviour of the fields given in section 2.2, and then at the event horizon by imposing appropriate regularity conditions.
Conserved charge at infinity: holographic boundary conditions
By considering the fall-off of the fields (2.9), the radial conserved charge (3.2) in the asymptotic region can be evaluated, recalling that ϕ_µ = −δV/δq_µ. In order to determine the functional form of V, some physically reasonable criteria must be used. In this case, we require compatibility of the asymptotic conditions given in (2.9) with the scale invariance of the reduced action under the transformations (3.1), which allows us to find the explicit form of this function. In particular, considering the scale transformations Ā_t(r̄) = λ^{-1} A_t(r) and Ā_φ(r̄) = λ A_φ(r) implies the transformation rules (3.4) for ϕ_µ and q_µ. It must be highlighted that the transformation rules (3.4) precisely coincide with the ones found in [8], where use was made of a scaling symmetry that leaves the configuration invariant and rescales the reduced action as Ī = λ²I. Compatibility of the transformation rules (3.4) under scalings with the integrability condition for the energy (2.13) requires, up to an arbitrary integration constant without variation, that the function V obey the differential equation (3.5). The general solution of equation (3.5) involves an arbitrary function F that describes a special set of boundary conditions compatible with the scale invariance. Hereafter, following [8], we will refer to them as "holographic boundary conditions". By using the integrability condition (2.13) and the holographic boundary conditions determined by (3.5), it is found that C(∞) can be expressed in terms of the global charges (2.10), (2.11) and (2.12). In the following subsection we focus on the value of the Noether quantity (3.2) in the case of configurations that possess an event horizon, that is, the charged BTZ black hole, where it is mandatory to impose some regularity conditions in order to ensure a well-defined Euclidean action principle (see e.g. [33]).
Conserved charge at the event horizon: regularity conditions
In order to evaluate the conserved charge (3.2) at the event horizon, which for the class of configurations (2.2) is determined by F²(r_+) = 0, we have to consider smooth configurations satisfying regularity of the Euclidean geometry around the event horizon. Once these generic regularity conditions are imposed and use is made of the constraint H = 0, (3.2) evaluated at the event horizon turns out to be the well-known Bekenstein-Hawking entropy of the charged BTZ black hole, i.e., S = A/(4G), recalling that κ = 8πG. Finally, the Smarr formula naturally emerges as a consequence of the equality C(r_+) = C(∞). Once the regularity conditions are taken into account, it is possible to identify N_∞ ≡ β, N^φ_∞ ≡ βΩ and Φ ≡ βΦ_e, where β is the inverse of the Hawking temperature, while Ω and Φ_e are the chemical potentials thermodynamically conjugate to the angular momentum J and the electric charge Q_e, respectively. With these identifications, the Smarr formula (3.11) follows. It is reassuring to verify that the charged BTZ black hole does satisfy the Smarr formula (3.11), as long as the special set of holographic boundary conditions determined by (3.5) is satisfied. (Similar relations for the entropy of three-dimensional black holes and cosmological configurations, as a bilinear combination of the global charges along with their corresponding chemical potentials, have been found in the context of higher spin gravity [33], [34], [35], hypergravity [36], [37] and extended supergravity [38], where the coefficients in front of each term in the entropy formula turn out to be the spin (conformal weight) of the corresponding generator.) In order to carry out this computation one has to consider the explicit form of the global charges for the black hole, obtained in [8] and given in (3.14), with ω = −q_φ/q_t. The Hawking temperature and the chemical potentials then follow, with the entropy explicitly given by S = 4π²r_+/(κ√(1−ω²)).
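The explicit form of (3.10)-(3.11) is not displayed above. Given the identifications N_∞ ≡ β, N^φ_∞ ≡ βΩ and Φ ≡ βΦ_e, and the homogeneity degrees quoted in the next section, the Smarr formula is presumably of the form

```latex
M \;=\; \tfrac{1}{2}\,T S \;+\; \Omega J \;+\; \tfrac{1}{2}\,\Phi_{e} Q_{e},
```

which is also the relation recovered below via the Euler theorem; the exact normalization used in the paper is an assumption here.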
Smarr formula from the Euler theorem
In this section we show that, by virtue of the special set of holographic boundary conditions determined by (3.5), it is possible to obtain homogeneous transformation laws for the extensive quantities, allowing one to use the Euler theorem in order to recover the Smarr formula (3.11) along the lines of its original derivation given in [1] for the Kerr-Newman black hole. Let us consider the scale transformations of the coefficients of the fall-off (2.9) appearing in the energy (2.14) and the angular momentum (2.15), given in (4.1). In the case of a generic V, the transformation laws of the mass (2.14) and the angular momentum (2.15) are given in (4.2) and (4.3). Note that the electric charge (2.12) already transforms as a homogeneous function of degree one, while the energy and the angular momentum possess anomalous scale transformation laws. By implementing the holographic boundary conditions, one finds the scale transformation of the function V inherited from (3.4), given in (4.5). Remarkably, by replacing (4.5) into (4.2) and (4.3), the additional anomalous logarithmic terms precisely cancel out, such that the energy and the angular momentum now transform as homogeneous functions of degree two, i.e., M̄ = λ²M and J̄ = λ²J, where M and J are given in (2.14) and (2.15), respectively. At this point, we focus on the charged BTZ black hole, whose entropy, given by S = (4π²/κ)R(r_+), transforms as a homogeneous function of degree one, i.e., S̄ = λS. In consequence, the entropy is a homogeneous function of degree 1/2 in M, J and Q_e², provided N_∞ = β, N^φ_∞ = βΩ and Φ = βΦ_e. Note that for the simplest choice of boundary conditions, V = 0, the global charges given by (2.14), (2.15) correspond to the ones found in [11]. In this case, the logarithmic terms in (4.1) lead to non-homogeneous contributions in the scale transformations of the global charges (2.14), (2.15), as can be seen explicitly in (4.11) and (4.12). Therefore, the logarithmic terms in (4.11) and (4.12) preclude the possibility of obtaining the Smarr formula (3.11), since the assumption of the homogeneous scaling property of the global charges is no longer satisfied.
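To spell out the Euler-theorem step that the text describes only in words, the following sketch assumes the standard first law dM = T dS + Ω dJ + Φ_e dQ_e together with the homogeneity property stated above (S of degree 1/2 in M, J and Q_e²); any convention beyond these stated properties is an assumption.

```latex
% Euler's theorem for S(M, J, Q_e^2) homogeneous of degree 1/2:
M\,\frac{\partial S}{\partial M} \;+\; J\,\frac{\partial S}{\partial J}
 \;+\; Q_{e}^{2}\,\frac{\partial S}{\partial (Q_{e}^{2})} \;=\; \tfrac{1}{2}\,S .

% Assumed first law: dS = \beta\,dM - \beta\Omega\,dJ - \beta\Phi_{e}\,dQ_{e}, hence
% \partial S/\partial M = \beta,\quad \partial S/\partial J = -\beta\Omega,\quad
% \partial S/\partial (Q_{e}^{2}) = -\beta\Phi_{e}/(2Q_{e}).  Substituting gives
\beta M \;-\; \beta\Omega J \;-\; \tfrac{1}{2}\,\beta\Phi_{e}Q_{e} \;=\; \tfrac{1}{2}\,S
\quad\Longleftrightarrow\quad
M \;=\; \tfrac{1}{2}\,T S \;+\; \Omega J \;+\; \tfrac{1}{2}\,\Phi_{e}Q_{e}.
```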
Ending remarks
In this work it has been shown that the Smarr formula for the charged BTZ black hole emerges from two different approaches. Both of them are based on the preservation of the fall-off of the fields under scale transformations that leave the reduced action principle invariant. In the first approach, we have proved that the scale invariance of the theory for stationary and circularly symmetric configurations is associated with a radially conserved charge. This conservation law leads to the Smarr formula as long as a special set of holographic boundary conditions is fulfilled. In the second approach, it was found that the same set of holographic boundary conditions confers the homogeneous scaling property on the global charges, allowing the Smarr formula of the black hole to be derived through the Euler theorem.
Throughout this work, we have considered the cosmological constant to be a coupling constant fixed without variation. Nonetheless, the problem related to the spoiling of the homogeneity property of the extensive variables of the electrically charged BTZ black hole has been addressed to some extent under rescaling of the cosmological constant, which leads to the introduction of additional thermodynamical terms both in the first law and in the energy formula (1.3) (see e.g. [39], [40]). This treatment might be consistently carried out by promoting the cosmological constant to a canonical variable through the mechanism described in [41]. However, recent results have shown the existence of a superselection rule that forbids a superposition of quantum states with different values of the cosmological constant in three dimensions [42], so that its value would be definite and, in consequence, cannot be rescaled.
Finally, it is noteworthy that, apart from yielding the Smarr formula for the charged BTZ black hole, the set of holographic boundary conditions endowed with the additional requirement of Lorentz symmetry makes the energy spectrum of the black hole nonnegative, and the electric charge bounded from above, for a fixed value of the energy [8]. This strongly suggests that the solution might be stable, so it would be interesting to carry out a thermodynamical analysis of the stability of the charged BTZ black hole by considering a generic set of boundary conditions. | 4,074 | 2017-10-16T00:00:00.000 | [
"Mathematics"
] |
The Representation of Motor (Inter)action, States of Action, and Learning: Three Perspectives on Motor Learning by Way of Imagery and Execution
Learning in intelligent systems is a result of direct and indirect interaction with the environment. While humans can learn by way of different states of (inter)action such as the execution or the imagery of an action, their unique potential to induce brain- and mind-related changes in the motor action system is still being debated. The systematic repetition of different states of action (e.g., physical and/or mental practice) and their contribution to the learning of complex motor actions has traditionally been approached by way of performance improvements. More recently, approaches highlighting the role of action representation in the learning of complex motor actions have evolved and may provide additional insight into the learning process. In the present perspective paper, we build on brain-related findings and sketch recent research on learning by way of imagery and execution from a hierarchical, perceptual-cognitive approach to motor control and learning. These findings provide insights into the learning of intelligent systems from a perceptual-cognitive, representation-based perspective and as such add to our current understanding of action representation in memory and its changes with practice. Future research should build bridges between approaches in order to more thoroughly understand functional changes throughout the learning process and to facilitate motor learning, which may have particular importance for cognitive systems research in robotics, rehabilitation, and sports.
INTRODUCTION
Learning in intelligent systems is a result of direct and indirect interaction with the environment. To understand how intelligent systems learn to adequately act in a given environment with respect to a particular task, thereby adapting, is of particular relevance to cognitive science disciplines such as psychology, biology, and computer science (e.g., Pfeifer and Bongard, 2007; Wolpert et al., 2011; Abrahamsen and Bechtel, 2012; Pacherie, 2012; Engel et al., 2013, 2015). This capability of goal-directed motor (inter)action changes and develops with practice, transitioning from unskilled into skilled motor (inter)action, and resulting in refined planning and execution of motor (inter)actions (e.g., Meinel and Schnabel, 2007; Schmidt and Wrisberg, 2008; Magill, 2011; Schmidt and Lee, 2011). Interestingly, advancing our understanding of intelligent systems' actions and their acquisition remains a significant endeavor to this day, especially in view of prospective applications in various settings such as robotics, psychology, sports, and rehabilitation. For instance, the development of intelligent interactive technical platforms which are to assist humans requires a thorough understanding of natural, intelligent forms of (inter)action and their acquisition, respectively (e.g., Pfeifer and Bongard, 2007; Schack and Ritter, 2009, 2013; Di Nuovo et al., 2013; De Kleijn et al., 2014). Understanding learning by way of different states of action (e.g., imagery or execution) and related functional changes within the motor action system, particularly with regards to action representation, may help to advance in this direction. Here, we overview the literature on learning by imagery and execution from three perspectives, namely the performance, the brain, and the mind perspective.
STATES OF (INTER)ACTION AND LEARNING
An action reflects "a set of mechanisms that are aimed at producing activation of the motor system for reaching a goal" (Jeannerod, 2004, p. 376). Similarly, interaction may be considered as sets of mechanisms of several individuals acting together, which are aimed at producing activations of all motor systems involved for reaching a shared goal. (Inter)actions can be overt as well as covert actions, that is, executed, imagined, or observed actions (Jeannerod, 2001, 2004). Given the principle of functional equivalence (Finke, 1979; Johnson, 1980; Jeannerod, 1994, 1995) and the simulation theory (Jeannerod, 2001, 2004, 2006), executed, imagined, and observed actions are all suggested to be actions, as each draws on the same action representation. While 'actual' actions involve both a covert (e.g., planning) and an overt (e.g., execution) stage of action, 'simulated' actions such as imagery imply a covert stage of action only (i.e., simulation state; s-state; Jeannerod, 2001). To this extent, each of the different types of s-states to some degree involves the activation of the motor action system. That is, any form of executed or simulated state of action is considered an action, regardless of whether it includes covert stages of action only or both covert and overt stages of action. Given the principle of functional equivalence, the repeated use of any of these states as a means of practice should lead to functional changes within the motor action system and to learning. Accordingly, mental types of practice have been suggested to be effective means to induce learning (e.g., Jeannerod, 1994, 1995, 2001, 2004). To date, it is widely accepted that humans can learn by way of different states of (inter)action, but their unique potential to induce changes in the motor action system is still being debated (e.g., Driskell et al., 1994; Allami et al., 2014; Di Rienzo et al., 2016). Interestingly, while evidence on the functional equivalence of executed and imagined actions is vast (e.g., Finke, 1979; Johnson, 1980; Jeannerod, 1994, 1995, 2001; Decety, 1996, 2002; Jeannerod and Frak, 1999), only little is known about how learning by execution or imagery works. Furthermore, it is unclear what the similarities and differences of these ways of learning are, particularly with regards to changes in action representation. In other words, research has yet to systematically examine the differential effects of learning by way of different states of action.
In this perspective paper, we focus on learning by way of imagery and execution, and discuss it from a perceptual-cognitive point of view on action representation. For this purpose, we review learning by way of imagery and execution at three different levels of analysis. First, we examine the literature from the performance perspective (here: in terms of changes in motor behavior), followed by the brain perspective (here: in terms of changes in neurophysiological representations of motor action), and finally the mind perspective (here: in terms of changes in perceptual-cognitive representations of motor action). In doing so, we highlight the role of action representation within a motor hierarchy, and exemplify how such models could advance our understanding of learning, enabling links between neurophysiological approaches and motor control and learning theories. Finally, we discuss potential future directions to advance research comparing learning by way of execution, imagery, and other states of action.
THE PERFORMANCE PERSPECTIVE ON IMAGERY AND EXECUTION: LEARNING AS CHANGES IN MOTOR PERFORMANCE
The systematic use of different states of action for practice and their contribution to the learning of complex motor actions has traditionally been approached by way of persisting performance improvements (e.g., Schmidt and Lee, 2011). Similarly, researchers investigating the influence of mental practice have traditionally focused on motor performance (e.g., Corbin, 1967a,b; for reviews and meta-analyses, see Richardson, 1967a,b; Feltz and Landers, 1983; Feltz et al., 1988; Hinshaw, 1991; Grouios, 1992; Driskell et al., 1994). From this, mental practice has been shown to be more effective than no practice, but less effective than physical practice (e.g., Feltz and Landers, 1983; Feltz et al., 1988; Driskell et al., 1994). Driskell et al. (1994), for instance, conducted a meta-analysis on the effects of mental practice in comparison to irrelevant practice and physical practice, reporting an overall average effect size of d = 0.53 for mental practice, and an effect size of d = 0.78 for physical practice. Moreover, combined mental and physical practice has been suggested to be as effective as or superior to physical practice (e.g., Corbin, 1967b; McBride and Rothstein, 1979; Hall et al., 1992; Gomes et al., 2014). From this perspective, mental practice is considered a potentially effective means to promote learning.
THE BRAIN PERSPECTIVE ON IMAGERY AND EXECUTION: LEARNING AS CHANGES IN NEUROPHYSIOLOGICAL ACTION REPRESENTATION
In search of answers to the question why learning by way of different states of action works (e.g., Heuer, 1985; Murphy, 1990; Murphy et al., 2008), neurocognitive approaches have evolved that consider learning from within (e.g., Jeannerod, 2001, 2004). Neurocognitive approaches highlight the role of action representation in the learning of complex motor actions from a neurophysiological perspective. So far, the adaptation of the brain (i.e., neurophysiological and neuroanatomical changes) as a result of physical practice has received a great deal of attention (e.g., Wadden et al., 2012). From this, multifaceted insights into central changes within the motor action system have been provided regarding the neural aspects of learning a motor action, and the neural plasticity of the brain, respectively (for a recent meta-analysis, see Hardwick et al., 2013; for reviews, see also e.g., Doyon and Ungerleider, 2002; Ungerleider et al., 2002; Doyon and Benali, 2005; Kelly and Garavan, 2005; Halsband and Lange, 2006; Dayan and Cohen, 2011). In the context of the principle of functional equivalence and the simulation theory (Jeannerod, 2001, 2004, 2006), the study of action representation from a neurophysiological point of view has received tremendous research interest (for overviews, see e.g., Decety, 2002; Guillot et al., 2014). While considerable research attention has been directed to comparing the different states of action, such as the imagery and the execution of an action (e.g., Decety, 1996, 2002; Jeannerod and Frak, 1999), only a few studies exist that compare learning by way of imagery and execution and respective changes in the brain (e.g., Pascual-Leone et al., 1995; Jackson et al., 2003; Nyberg et al., 2006; Zhang et al., 2012, 2014; Allami et al., 2014; Avanzino et al., 2015; for a review, see Di Rienzo et al., 2016).
For instance, Pascual-Leone et al. (1995) investigated plastic changes in the human motor action system resulting from physical and mental practice, using transcranial magnetic stimulation. Interestingly, while the authors found physical practice to be superior to mental practice in terms of performance improvement in a key-pressing task, both physical and mental practice led to the same plastic changes, namely an equally increased size of the cortical representation for the finger muscle groups involved. From this, the authors concluded that mental practice modulates the neural circuits involved in learning, potentially by forming a cognitive model of the motor action. Jackson et al. (2003) investigated cerebral functional changes in the brain as induced by mentally practicing foot movements, employing positron emission tomography, and compared these changes to those induced by physically practicing foot movements (Lafleur et al., 2002). Similar to the findings reported by Lafleur et al. (2002) on physical practice effects, the authors found mental practice to be associated with functional cerebral reorganization in the right medial orbitofrontal cortex. From the lack of striatum activation after mental practice, however, the authors suggested that the reorganization relates to the planning and anticipation of motor actions rather than to their motor execution. More recently, Zhang et al. (2014) examined changes in resting-state functional connectivity as a result of mental practice, using functional magnetic resonance imaging. The authors reported alterations in cognitive and sensory resting-state networks in various brain systems after learning by way of motor imagery (i.e., mental practice), while no alterations in connectivity were found in the control condition (i.e., no practice). From this, the authors concluded that the modulation of resting-state functional connectivity induced by mental practice may be associated with attenuation in cognitive processing related to the formation of motor schemas. These neurophysiological studies on learning as induced by mental practice and/or physical practice show that both mental and physical practice lead to significant changes in action-related brain activation during skill acquisition. At the same time, however, they reveal distinct differences pointing to a hierarchy in learning by way of different states of action (for more details, see the discussion section).
From a neurophysiological perspective, learning can be considered as neurophysiological reorganization, with the neurophysiological representation of motor action functionally developing over the course of the learning process. This seems to hold for both learning by execution and learning by imagery. Neurophysiological studies as the ones exemplified above provide valuable multifaceted insights into the functional changes in brain activation as a result of physical and mental practice. Findings elucidating neurophysiological changes associated with motor learning as induced by mental and physical practice, however, do not necessarily allow for specific conclusions regarding action representation and its relation to motor control. Therefore, it seems important to link these approaches to models and theories of motor control and learning, particularly those emphasizing the role of action representation, in order to be able to draw specific conclusions about changes of the motor action system during learning. To put it differently: Given the functional reorganization of neurophysiological features in the brain, is there a functional reorganization of perceptual-cognitive representations of motor (inter)action in the mind as part of a functional stratification on various levels within the motor action system?
THE MIND PERSPECTIVE ON IMAGERY AND EXECUTION: LEARNING AS CHANGES IN PERCEPTUAL-COGNITIVE ACTION REPRESENTATION
According to perceptual-cognitive approaches (e.g., theory of anticipative behavioral control: Hoffmann, 1993; theory of event coding: Hommel et al., 2001; simulation theory: Jeannerod, 2001) and the original idea of a bidirectional link between an action and its effects (i.e., ideomotor theory: James, 1890), actions are primarily guided by cognitively represented perceptual effects. Drawing on the seminal work of Bernstein (1967) and his idea of a model of the desired future, motor actions can be considered as being stored in memory as well-integrated representational networks or taxonomies comprised of perceptual-cognitive units (i.e., basic action concepts; BACs) that guide action execution (cf. cognitive action architecture approach, CAA-A; for an overview, see Schack, 2004; Schack and Ritter, 2009). Moreover, these networks of BACs are suggested to change throughout the process of motor learning by way of perceptual-cognitive scaffolding, resulting in a more elaborate perceptual-cognitive representation.
Based on research relating to the CAA-A (e.g., Schack and Mechsner, 2006), experts, as compared to novices, have been shown to hold structured representations. A functionally structured representation is comprised of groupings of perceptual-cognitive units (i.e., groupings of BACs) that relate to the same (sub-)functions of the action, and thus reflect the functional phases of the motor action (cf. Göhner, 1992, 1999; Hossner et al., 2015). Schack and Mechsner (2006), for instance, examined representational networks of the tennis serve in experts and non-experts, using the structural dimensional analysis of mental representations (SDA-M). Results revealed that skilled individuals held functionally structured representations relating well to the biomechanical demands of the task (i.e., clearly reflecting the three movement phases pre-activation, strike, and final swing), whereas unskilled individuals' representations were unstructured. This has been shown to generalize to motor skills of different complexities (e.g., manual action: Braun et al., 2007; gait: Schega et al., 2014; Stöckel et al., 2015; dance: Bläsing, 2010).
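To make the notion of a functionally structured representation more concrete, the following is a minimal, hypothetical sketch of how clusters of BACs might be obtained from pairwise distance judgments via hierarchical cluster analysis, in the spirit of SDA-M; the BAC labels, distance values, and threshold are invented for illustration and do not reproduce the authors' procedure or data.

```python
# Hypothetical sketch: deriving a representation structure from BAC distances
# via hierarchical cluster analysis (illustrative only, not the SDA-M software).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical BAC labels for a serve-like movement (made up for illustration).
bacs = ["ball throw", "racket take-back", "trunk rotation", "whole-body stretch",
        "racket acceleration", "ball impact", "wrist snap", "follow-through"]

# Hypothetical symmetric distance matrix (0 = judged as belonging together,
# 1 = maximally distinct), e.g., derived from a split procedure.
D = np.array([
    [0.0, 0.2, 0.3, 0.3, 0.8, 0.9, 0.8, 0.9],
    [0.2, 0.0, 0.2, 0.3, 0.8, 0.9, 0.9, 0.9],
    [0.3, 0.2, 0.0, 0.2, 0.7, 0.8, 0.8, 0.9],
    [0.3, 0.3, 0.2, 0.0, 0.7, 0.8, 0.8, 0.8],
    [0.8, 0.8, 0.7, 0.7, 0.0, 0.2, 0.3, 0.6],
    [0.9, 0.9, 0.8, 0.8, 0.2, 0.0, 0.2, 0.5],
    [0.8, 0.9, 0.8, 0.8, 0.3, 0.2, 0.0, 0.5],
    [0.9, 0.9, 0.9, 0.8, 0.6, 0.5, 0.5, 0.0],
])

# Average-linkage hierarchical clustering on the condensed distance vector.
Z = linkage(squareform(D, checks=False), method="average")

# Cut the dendrogram at a (here arbitrary) critical distance to obtain clusters
# of BACs, i.e., candidate functional phases of the action.
clusters = fcluster(Z, t=0.5, criterion="distance")
for label, cluster in zip(bacs, clusters):
    print(f"{label}: cluster {cluster}")
```

In such a sketch, a "structured" representation corresponds to clusters that map onto the functional phases of the movement, whereas an "unstructured" one yields no interpretable grouping.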
With regards to learning, action representations have been shown to functionally adapt in the direction of an elaborate representation during motor learning (Frank et al., 2013). Findings revealed that, together with improvements in golf putting performance, representations changed with practice, developing toward more functional ones, with groupings of perceptual-cognitive units (i.e., groupings of BACs) relating more closely to the same (sub-)functions of the action itself (i.e., preparation, forward swing, and impact). Drawing on the finding that novices' perceptual-cognitive representations of complex action develop and adapt with practice, Frank et al. (2014) addressed the development of one's representation according to type of practice, comparing physical practice (i.e., repeated motor execution), mental practice (i.e., repeated motor imagery) and their combination. While motor performance reflected the well-known pattern of magnitude of improvement according to type of practice (i.e., combined practice > physical practice > mental practice > no practice), mental practice, either solely or in combination with physical practice, led to even more elaborate representations compared to physical practice only. Representation structures of the groups practicing mentally became more similar to a functional expert structure, whereas those of the physical practice group revealed less development. Building on these findings, a subsequent study further examined the perceptual-cognitive background of performance changes that occur within the motor action system as a result of mental and physical practice, employing a mobile eye-tracking system to investigate gaze behavior (i.e., the quiet eye; e.g., Vickers, 1992, 1996, 2009). Combined practice led both to more developed representation structures and to more elaborate gaze behavior prior to the execution of the putt, with final fixations prior to the onset of the putting movement (i.e., the quiet eye) being longest for this group, and better developed representation structures relating to longer quiet eye durations after learning. Accordingly, the quiet eye might reflect a predictive mode of control that initiates a cognitively demanding process of motor planning based on the representation available (for details on a perceptual-cognitive perspective on the quiet eye, see .). More recently, learning as it relates to interaction was investigated by examining representational frameworks of interaction and their development with mental practice (Frank et al., under review). The impact of a team action imagery intervention on futsal players' shared representations of team-specific tactics was investigated. Mental practice consisted of practicing four team-specific tactics (i.e., counter-attack, play making, pressing, transitioning) by imagining team actions in specific game situations three times a week over the course of 4 weeks. Results revealed representational networks of team action becoming more similar to those of experts after mental practice. This study indicates that the imagery of team actions can have a significant impact on players' representational networks of interaction in long-term memory.
From this line of studies, the learning of a motor action can be considered as perceptual-cognitive reorganization, with the perceptual-cognitive representation of action functionally developing throughout the learning process. This research furthermore indicates that the perceptual-cognitive reorganization taking place during learning depends on the state of action used for practice. Learning by way of imagery differs from learning by way of execution, with practice through imagery promoting the functional development of a perceptual-cognitive action representation (perceptual-cognitive explanation of mental practice), while not necessarily transferring one-to-one to motor performance. This points to a differential influence of mental and physical practice with regards to different levels of action organization, with mental practice operating primarily on higher levels within the motor action system, particularly during early skill acquisition (for a more detailed discussion, see Frank, 2014). This approach, particularly together with neurophysiological approaches, may add to the picture of potential basic mechanisms that underlie each type of practice, an issue still being highly debated (e.g., Annett, 1995; Jackson et al., 2001; Munzert et al., 2008; Murphy et al., 2008; Cumming and Williams, 2012; Glover and Dixon, 2013). By complementing existing evidence from a performance and a brain perspective on learning by mental and physical practice (e.g., Driskell et al., 1994; Allami et al., 2014), these findings contribute to a better understanding of the adapting motor action system, by disentangling changes on various levels within the motor action system during learning.
DISCUSSION AND CONCLUSION
While there is ample evidence on the functional equivalence between different states of action (such as the imagery and the execution of an action; e.g., Decety, 1996, 2002; Jeannerod and Frak, 1999), research addressing the similarity or difference with respect to the influence that each state of action has on the motor action system during learning has remained scarce to date. Meanwhile, more and more researchers have called for taking into account potential differences between the states of action and their contribution to motor control and learning, as these might be equally (or even particularly) meaningful for fully understanding the motor action system (e.g., Munzert et al., 2009; Wakefield et al., 2013; O'Shea and Moran, 2017). Given that each state of action differs to some degree, the repeated use of imagery or execution is likely to differ in its influence on the motor action system. In other words, while the repeated use of imagery and execution of an action is suggested to result in learning, learning is likely to differ as a function of the state of action used for practice.
Here, we outlined learning by way of imagery and execution from three perspectives. While there is ample evidence from the performance perspective (for a review, see e.g., Driskell et al., 1994), research from a brain perspective (for a review, see e.g., Di Rienzo et al., 2016) and from a mind perspective (e.g., on action representation) as it relates to learning by imagery and execution has just started to gain momentum. Despite these initial steps, the potential of imagery and execution to induce changes within the motor action hierarchy during learning remains to be explored more thoroughly. Interestingly, although sometimes not explicitly introduced as the theoretical background of their studies, (indirect or direct) conclusions about the formation of action representations are drawn from the brain changes observed, linking neurophysiological findings to hierarchical motor control and learning theories: for instance, Pascual-Leone et al. (1995, p. 1043) discussed that repeated imagery may help establish a cognitive model of the motor action; Zhang et al. (2014, p. 4) state that motor schemas have developed; Jackson et al. (2003, p. 1178) conclude from the lack of striatum activation after mental practice that the re-organization relates to the planning and the anticipation of motor actions rather than to their motor execution. By doing so, each of these studies implicitly refers to a theoretical background of motor control and learning and alludes to some form of representational format in memory. However, the results of these studies have not yet been discussed in the light of hierarchical models of action organization focusing on higher and lower levels of action representation, such as the one delineated in the present perspective paper. By suggesting that mental practice helps promote a 'cognitive model,' 'attenuated cognitive processing,' and the 'planning and the anticipation of actions,' these findings are in line with the perceptual-cognitive explanation of mental practice and the idea that the repeated use of imagery particularly helps establish perceptual-cognitive representations of action.
Future studies may place more emphasis on the role of action representation and compare learning by way of imagery and learning by way of execution with regard to brain- and mind-related changes on different levels within the motor action system. For instance, related research disentangling neurophysiological representations of actions within a motor hierarchy (e.g., Grafton and Hamilton, 2007), research on the degree of abstractness of neurophysiological representations of actions (e.g., Tucciarelli et al., 2015; Wurm and Lingnau, 2015; Turella et al., 2016), or research on the structural geometry of neurophysiological representations across states of action (Zabicki et al., 2016), in conjunction with perceptual-cognitive approaches to motor learning, might be promising avenues to better understand learning across states of action. In a recent study, for instance, Zabicki et al. (2016) investigated imagined and executed actions using a multivariate approach and a representational similarity analysis of neurophysiological representations of action, highlighting a similar structural geometry as well as distinct differences in action representation between the two states of action. Using such approaches together with hierarchical, perceptual-cognitive ones in the realm of motor cognition might help to further approach the phenomenon of action representation in motor control and learning and the unique potential of imagery and execution to induce changes on different levels within the motor action system during learning.
In sum, research directly comparing the two modes of learning has remained scarce to date, with many studies focusing on one mode only (e.g., imagery: Zhang et al., 2014; execution: Lafleur et al., 2002). Furthermore, most of the studies conducted so far focus on the potential similarities that learning by way of motor imagery may share with learning by way of motor execution, thereby disregarding potential differences across learning types, such as a differential influence on various levels within the motor action system. And finally, the brain and the mind perspective have been considered largely in isolation, investigating either neurophysiological or perceptual-cognitive representations. Accordingly, three main challenges may have to be addressed by future studies in order to advance research comparing learning by way of execution, imagery, and observation, and thus to understand intelligent systems and learning by different states of action more thoroughly. First, research comparing learning by different states of action should be conducted in a systematic manner, employing research designs that allow for examining states of action both in isolation and in combination (cf. the four-group design in mental practice research, e.g., Corbin, 1967b; Hall et al., 1992). Second, research questions and hypotheses should be directed toward the differences between learning by different states of action, thus going beyond the traditional focus on the functional equivalence between the states of action and the potential similarities across learning types, toward a hierarchical view of the motor action system. Third, learning by different states of action should be approached in future research by integrating findings and methods from different disciplines (e.g., Moran et al., 2012), such as the ones exemplified above, in order to approach the problem from distinct but complementary perspectives.
Systematically examining learning by different states of action from various perspectives, focusing on both the similarities and the differences across higher and lower levels of action organization, may contribute to a better understanding of the motor action system. Complementing both the performance and the brain perspective with a mind perspective may advance our understanding of intelligent systems in general, and of the learning of (inter)action across states of action in particular, in order to be better able to design training tools that facilitate motor (re)learning. Future research should therefore build bridges between the perspectives in order to more thoroughly understand functional changes throughout learning across states of action, and to subsequently address specific levels within the motor action hierarchy as part of individualized coaching in robotics, rehabilitation, or sports settings (e.g., Hülsmann et al., 2016).
AUTHOR CONTRIBUTIONS
Conception and draft of perspective paper (CF and TS).
FUNDING
This research/work was supported by the Cluster of Excellence Cognitive Interaction Technology 'CITEC' (EXC277) at Bielefeld University, which is funded by the German Research Foundation (DFG). Furthermore, we acknowledge support for the Article Processing Charge by the German Research Foundation and the Open Access Publication Fund of Bielefeld University.
| 5,965.4 | 2017-05-23T00:00:00.000 | ["Computer Science", "Psychology"] |
Calcium Alginate Gels as Stem Cell Matrix – Making Paracrine Stem Cell Activity Available for Enhanced Healing after Surgery
Regeneration after surgery can be improved by the administration of anabolic growth factors. However, maintaining these factors locally at the site of regeneration is problematic. The aim of this study was to develop a matrix system containing human mesenchymal stem cells (MSCs) which can be applied to the surgical site and allows the secretion of endogenous healing factors from the cells. Calcium alginate gels were prepared by a combination of internal and external gelation. The gelling behaviour, mechanical stability, surface adhesive properties and injectability of the gels were investigated. The permeability of the gels for growth factors was analysed using bovine serum albumin and lysozyme as model proteins. Human MSCs were isolated, cultivated and seeded into the alginate gels. Cell viability was determined by AlamarBlue assay and fluorescence microscopy. The release of human VEGF and bFGF from the cells was determined using an enzyme-linked immunoassay. Gels with sufficient mechanical properties were prepared which remained injectable through a syringe and solidified within a sufficient time frame after application. Surface adhesion was improved by the addition of polyethylene glycol 300,000 and hyaluronic acid. Human MSCs remained viable within the gels for the duration of 6 weeks. Human VEGF and bFGF were found in quantifiable concentrations in cell culture supernatants of gels loaded with MSCs and incubated for a period of 6 weeks. This work shows that calcium alginate gels can function as immobilization matrices for human MSCs.
Introduction
Recent research has focused on improving the healing capacity of various tissues after surgery. Here, the application of anabolic (e.g. bFGF, IGF, TGFβ1) and proangiogenic growth factors (e.g. VEGF) resulted in improved regenerate quality and strength in different animal models [1,2,3,4,5]. However, due to the low stability of the growth factors, either multiple injections of recombinant proteins or stable gene transfer was necessary to achieve these results. For safety reasons, gene transfer is presently not applicable in patients. Furthermore, the necessity of repetitive local injections would cause enormous costs and considerable burden for the patient, with an increased infection risk. Hence, none of these treatments has yet reached patient therapy.
During the last decade, autologous mesenchymal stem cells (MSCs) have received more and more interest within the field of regenerative medicine. These adult stem cells are easy to harvest and have the potential to differentiate into mesenchymal cell types, such as tenocytes, chondrocytes and osteoblasts, making them a promising tool in mesenchymal tissue regeneration. Several studies have revealed beneficial effects of MSCs on tissue regeneration in animals [6]. Here, MSCs participated in the healing process and differentiated into local tissue cells, leading to better regenerates [7]. Furthermore, recent studies revealed that the most important impact of MSCs on tissue regeneration is most likely their paracrine activity: upon secretion of a cocktail of anabolic cytokines, healing mechanisms are improved. This important paracrine activity recently even caused some authors to call MSCs an "injury drug store" [8].
The aim of our present study was to establish a delivery system that makes the paracrine activity of autologous mesenchymal stem cells usable to enhance regeneration after surgery. The goal of the project was to establish a method which is applicable during arthroscopic and open surgical procedures and directly transferable to the operating theatre. Therefore, we designed a matrix as a carrier that allows immobilization of autologous MSCs harvested during the operation. The matrix has to fulfil several requirements: it should promote survival of the incorporated cells for at least 6 weeks (the average time span for regeneration of most tissues), while at the same time allowing the diffusion of growth factors from the matrix into the environment. Additionally, the matrix should be readily applicable during open and arthroscopic surgeries. Finally, the matrix should adhere to collagen to allow anchoring on the host tissue, should be injectable using a standard syringe and should solidify within 30 minutes during surgery.
Within the present study, alginate hydrogels were chosen as the basis of the matrix due to their suitable mechanical properties and proven biocompatibility. The alginate hydrogels were systematically modified towards the desired requirements by optimisation of the gelation process and the alginate concentration and by the addition of hyaluronic acid and polyethylene glycol 300,000. The suitability of the obtained hydrogels was demonstrated in vitro using primary human MSCs.
Materials and Methods
Materials
Polyethylene glycol (PEG; MW 300,000 Da) was obtained from Sigma-Aldrich, Taufkirchen, Germany. All other chemicals were of analytical grade.
Methods
Preparation of alginate gels. Sodium alginate Biochemica (alginate 1) and alginic acid sodium salt from brown algae (alginate 2) (Sigma-Aldrich, Germany) were prepared as 2.0% (w/w) solutions by dispersing them in 150 mM phosphate buffered saline (PBS), pH 7.0, in a glass beaker and stirring overnight at room temperature. In addition, PEG or sodium hyaluronate (NaHa), and combinations thereof, were used to prepare alginate hybrid gels. After the alginate was completely dissolved, PEG or NaHa was added at the respective concentration (w/w) and the mixture stirred again overnight until a homogeneous viscous alginate solution was obtained. Non-sterile conditions were used for the preparation of the gels for studying the gel composition and mechanical stability. For the cell assays, all solutions were sterile-filtered using 0.2 μm filters (Aerodisc LC 25 mm, PALL Life Sciences, Ann Arbor, MI, USA) prior to gel preparation.
Internal gelation. Suspensions of calcium carbonate and D-glucono-delta-lactone (GDL) in water (5 ml) were prepared at a 1:2 ratio, with concentrations ranging from 5 mM to 200 mM and 10 mM to 400 mM, respectively, using cold, highly purified water (5°C). 15 ml of the alginate solution described above was added to a 60 ml flat-bottom falcon tube. Subsequently, 5 ml of the CaCO3/GDL suspension was added and the mixture vortexed for 30 seconds.
External gelation. 15 ml of the alginate solution described above was added to a 60 ml flat-bottom falcon tube. Subsequently, 5.0 ml of a calcium chloride solution (0 to 300 mM) was carefully added along the rim of the falcon tube to cover the alginate and induce external gelation.
Combination of internal and external gelation. Sodium alginate solutions (alginate 1 or 2) (5.00 g± 0.05 g) and calcium carbonate/GDL suspensions (2.00 g ± 0.05 g) were weighed and added to a flat-bottom 60 ml falcon tube. The resulting mixture was vortexed (Merck KGaA, Darmstadt, Germany) for 30 seconds. The gels were then covered with 5.0 ml calcium chloride solution and left to equilibrate for 30 minutes. Subsequently, the gels were washed twice with 10 ml PBS and finally covered with 20 ml PBS and left for 2 hours at room temperature.
Determination of rheological properties and gelling behaviour. The rheological behaviour was investigated using an MCR 100 rheometer (Physica-Anton Paar, Graz, Austria) equipped with a cone-plate setup and a CP50-1 cone. The viscosity was observed as a function of time and shear rate. Experiments were conducted at 5°C, 20°C, and 37°C. A shear rate γ of 1/s was employed at a measurement position of 0.1 mm. Measurements were performed for 10 seconds per measurement point for a total duration of 30 minutes. Experiments were independently repeated three times with different samples.
Determination of the water uptake capacity of alginate hydrogels. Alginate gels were prepared using the combination of internal and external gelation as described above. After incubation in CaCl2, specimens were carefully placed on light-duty tissue wipers (VWR, Ismaning, Germany). Samples were weighed directly after preparation (t0) and transferred into a 50 ml falcon tube filled with 25 ml of DMEM high glucose. Samples were placed in an incubator (Certomat IS, Sartorius AG, Goettingen, Germany) at 37°C and 40 rpm. DMEM high glucose was exchanged daily. Weighing of the alginate gels was repeated 1, 7, 14, and 20 days after preparation to determine the water uptake capacity of the alginate hydrogels.
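As a minimal illustration of how these gravimetric data can be expressed, the relative mass change of a gel over the incubation period may be computed as sketched below (Python; the masses shown are hypothetical example values, not measured data):

```python
def water_uptake_percent(mass_t0_g, masses_g_by_day):
    """Relative mass change (%) of a gel versus its mass directly after preparation (t0)."""
    return {day: 100.0 * (m - mass_t0_g) / mass_t0_g
            for day, m in sorted(masses_g_by_day.items())}

# hypothetical example for one gel weighed on days 1, 7, 14 and 20
print(water_uptake_percent(5.00, {1: 5.03, 7: 5.05, 14: 5.02, 20: 4.99}))
```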
Determination of mechanical stability (compressibility and stiffness). A TA.XTplus texture analyser (Stable Micro Systems, Surrey, UK) with the following technical specifications (force capacity: 50 kg.f (500 N); force resolution: 0.1 g; load cells: 0.5, 5, 30, 50 kg.f; speed range: 0.01-40 mm/s; range setting: 0.01-280 mm) was used to determine the mechanical stability of the gels. Gels were prepared in 60 ml falcon tube caps to provide a constant plane surface and to stabilize the central position of the samples during the measurement (Fig. 1a). A cylindrical probe with a circular area of 0.4 mm² was used for all measurements. Measurements were performed at a test speed of 0.5 mm/s once a trigger force of 0.1 N was reached. The emerging forces were measured over a distance of 8.0 mm, which corresponds to half the gel height. For each formulation, three independent gels were measured. Data analysis was performed using the TextureExponent32 software (Stable Micro Systems, Surrey, UK). Gel strength was defined as the maximum force applied before rupture of the gels occurred.
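The gel-strength readout described above reduces each force–distance trace to a single number; a small sketch of this evaluation is shown below (Python, with a synthetic trace standing in for the TextureExponent32 export):

```python
import numpy as np

def gel_strength(displacement_mm, force_n):
    """Maximum force recorded before rupture, i.e. the peak of the force-distance trace."""
    idx = int(np.argmax(force_n))
    return float(force_n[idx]), float(displacement_mm[idx])

# synthetic trace over the 8 mm compression distance used in the measurements
d = np.linspace(0.0, 8.0, 400)
f = 20.0 * np.sin(np.pi * d / 16.0) * (d < 6.5)   # rupture assumed at ~6.5 mm (hypothetical)
print(gel_strength(d, f))
```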
For the long-term mechanical stability, gels were prepared as described above with the addition of sodium azide to prevent bacterial growth. Samples were placed in a horizontal shaker (Heidolph Instruments, Schwabach, Germany) at 37°C at a shaking rate of 5 rpm. Samples were taken after 0, 1, 3 and 6 weeks, and the mechanical stability was tested on three independent gels at each time point, as described above.
Determination of surface adhesion properties. Gels were prepared in a 15 ml falcon tube into which the piston of a 1 ml syringe (Normject, Henke-Sass Wolf, Tuttlingen, Germany) had been inversely placed prior to the solidification process (Fig. 1b). After external gelation, the gels were gently pulled from the falcon tube and cut with a sharp blade. The piston was then attached to a custom-made holder for the texture analyser. Adhesion of the gels was measured on three different surfaces: Teflon, polystyrene and gelatin gels. Alginate gels were pressed onto the respective surfaces with a trigger force of 0.05 N and an applied force of 0.5 N for 10 seconds. The force required to detach the gels from the surface was measured as the adhesion force. The test speed applied was 0.05 mm/s. Adhesion was tested on three independent gels for each formulation.
Injectability and spreading after injection. Calcium alginate gels were filled into commercially available syringes (10/20 ml Normject, Henke-Sass Wolf, Tuttlingen, Germany) at 0, 5, 10, 15, 20, and 30 minutes after internal gelation had been initiated. The syringe was then attached via a custom-made holder to the texture analyser, and the force required to eject a volume of 5 ml through the syringe (inner diameter 2.3 mm) was measured. Spreading of the gels after ejection was investigated macroscopically. Testing was repeated in three independent experiments.
Loading of gels with proteins. Bovine serum albumin (BSA) and lysozyme were used as model proteins to investigate loading and release from the gels. BSA (molecular weight 67 kDa, isoelectric point 4.5) served as a model for human vascular endothelial growth factor (VEGF), whereas lysozyme (molecular weight 11.7 kDa, isoelectric point 11.5) served as a model for human fibroblast growth factor 2 (h-bFGF). BSA and lysozyme were dissolved in PBS at a concentration of 7.0 mg/ml, as determined spectroscopically using a NanoDrop 2000 UV spectrophotometer (Thermo Scientific, Wilmington, USA) with the extinction coefficient/molar absorptivity at 280 nm (2.65 mg⁻¹ ml cm⁻¹ for lysozyme; 1.346 mg⁻¹ ml cm⁻¹ for BSA). After the addition of the calcium carbonate/GDL mixture, the final protein concentration was 5.0 mg/ml. Alginate gels were transferred into 60 ml falcon tubes containing 35 ml of 150 mM PBS buffer pH 7.4 and incubated in a horizontal shaker (Heidolph Instruments, Schwabach, Germany) at 37°C and 5 rpm for 168 hours. The falcon tubes were placed horizontally, so that the release medium completely covered the gels and release from the gels was not restricted. Testing was performed in three independent experiments.
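The spectroscopic concentration check follows the Beer-Lambert relation c = A / (ε · l) with the extinction coefficients quoted above; a small sketch is shown below (assuming the instrument reports absorbance normalised to a 10 mm path, and using illustrative absorbance values):

```python
# mass extinction coefficients at 280 nm from the text, in mg^-1 ml cm^-1
EXT_COEFF_280 = {"lysozyme": 2.65, "BSA": 1.346}

def protein_conc_mg_ml(a280, protein, path_cm=1.0):
    """Beer-Lambert: concentration = absorbance / (extinction coefficient * path length)."""
    return a280 / (EXT_COEFF_280[protein] * path_cm)

print(protein_conc_mg_ml(9.42, "BSA"))        # ~7.0 mg/ml stock solution
print(protein_conc_mg_ml(18.55, "lysozyme"))  # ~7.0 mg/ml stock solution
```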
Microscopic investigations. To determine the distribution and shape of calcium carbonate crystals in the gels, gels were cut into 1 mm thin slices using a microtome blade. The slices were subsequently investigated using an incident light microscope VHX-500 F (Keyence, Osaka, Japan).
Isolation of primary human bone marrow-derived mesenchymal stem cells (MSCs). 3-5 ml of bone marrow was obtained from patients after their informed consent. The study was approved by the ethics committee of the Faculty of Medicine of the Technical University of Munich (Project Number 5217/111, TU Munich, Germany). Before bone marrow isolation, patients gave their written informed consent. Within the present study, we isolated MSCs from three patients (two female, aged 80 and 55 years, and one male, aged 76 years). The following stem cell experiments were performed independently with cells from each patient.
Isolation was performed aseptically in a laminar air flow cabinet (Thermo Scientific, Langenselbold, Germany). Cell culture medium was pre-warmed in a water bath at 37°C prior to use. Bone marrow was diluted with sterile PBS until pipetting of the sample was possible. In a 15 ml falcon tube, a volume of 8 ml of the bone marrow sample was carefully layered onto 5 ml of lymphocyte separation medium LSM 1077. Tubes were centrifuged for 20 minutes at 650 x g without brake (Eppendorf, Hamburg, Germany) until the samples were clearly separated into fat, serum, mononuclear cells, lymphocyte separation medium, and erythrocytes with bone fragments. The layer containing the mononuclear cells was collected, washed with PBS and transferred to a 75 cm² cell culture flask to which 15 ml of cell culture medium (DMEM high glucose 4.5 g/L, 10% bovine fetal calf serum and 1% penicillin/streptomycin (10,000 units of penicillin, 1,000 μg of streptomycin)) was added. Samples were incubated at 37°C in a 5% CO2 atmosphere in an incubator (Thermo Electron LED, Langenselbold, Germany). After 48 hours, the medium was exchanged to remove non-adherent cells.
Cultivation of human bone marrow-derived MSCs. Cells were cultured in cell culture flasks containing 0.2 ml of cell culture medium per cm². The medium consisted of DMEM high glucose 4.5 g/L, 10% bovine fetal calf serum and penicillin/streptomycin (10,000 units of penicillin, 1,000 μg of streptomycin, 29.2 mg/ml L-glutamine in a 10 mM citrate buffer). Adherent cell cultures were passaged after reaching approximately 70-80% confluence. Cells were detached from the flask surface using 1.5 ml trypsin/EDTA, and the enzymatic reaction was stopped by the addition of fresh medium. Cells were cultivated for three passages before they were used for experiments. Before use, cultures were routinely confirmed to be negative for CD14 and CD45 and positive for CD90 and CD105 by FACS analysis. Furthermore, multipotent differentiation potential towards the osteogenic, chondrogenic and adipogenic lineages was confirmed for each cell batch used.
Seeding and cultivation of human bone marrow-derived MSCs in alginate gels. Aliquots of cell suspension (3x10⁶ cells) were transferred into 50 ml falcon tubes and centrifuged for 10 min at room temperature at 350 x g (Eppendorf, Hamburg, Germany). The supernatant was then discarded. A volume of 3 ml of alginate solution was added to the tubes and vortexed for 30 seconds to suspend the cells within the viscous solution. For the internal gelation, a volume of 1.2 ml of calcium carbonate/GDL suspension was added to the alginate-cell mixture, followed by vortexing for 30 seconds. 3 ml of the suspension was pipetted into a six-well plate, to which 4.0 ml of a 200 mM calcium chloride solution was added. All samples were left to equilibrate for 30 min at room temperature.
Alginate gels loaded with MSCs were cultivated at 37°C in a 5% CO2 atmosphere. 3 ml of each gel was placed in a well of a six-well plate filled with 9 ml of cell culture medium. Medium exchange was performed once per week. Supernatants were collected and stored at -80°C (Heraeus GmbH, Hanau, Germany) until they were analysed.
AlamarBlue assay for the determination of cell viability. To investigate cell viability within the alginate gels, we independently tested MSCs harvested from three donors. The cell culture medium of cells entrapped in alginate gels was aspirated and 1 ml of a 10% AlamarBlue solution was added to each sample. Cells were then incubated at 37°C for 100 minutes. After incubation, three 100 μl samples of each AlamarBlue solution were transferred to a NUNC MaxiSorp 96-well plate (Thermo Scientific, Wilmington, USA) and the absorbance was measured at 600 nm and 570 nm using a FLUOstar OMEGA plate reader (BMG Labtech, Ortenberg, Germany). Alginate gels without cells served as negative controls. Positive controls were obtained from subconfluent 2D cell cultures of MSCs in six-well plates. Relative viability was calculated according to the manufacturer's protocol. The blank for the 2D control was AlamarBlue reagent without cells; for the gel testing, it was AlamarBlue reagent on an alginate gel without cells.
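A simplified way to express the gel readings relative to the 2D positive control is sketched below; note that this is only an illustrative background-corrected ratio with hypothetical absorbance values, not the exact dual-wavelength percent-reduction formula from the manufacturer's protocol that was actually used:

```python
def relative_viability_percent(a570_s, a600_s, a570_blank, a600_blank, a570_pos, a600_pos):
    """Background-corrected 570/600 nm signal of a gel sample relative to the 2D control."""
    sample  = (a570_s - a570_blank) - (a600_s - a600_blank)
    control = (a570_pos - a570_blank) - (a600_pos - a600_blank)
    return 100.0 * sample / control

# hypothetical absorbance readings
print(relative_viability_percent(0.82, 0.35, 0.20, 0.30, 1.05, 0.38))
```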
Fluorescence microscopy. 6 μl of calcein-AM was added to the alginate gels loaded with MSCs, and the plates were incubated for 30 minutes at 37°C. Next, 2 μl of Hoechst 33342 at a concentration of 0.5 μg/l in PBS was added to each well and incubated at room temperature for 15 minutes. Cells were then analysed using a Keyence BZ-9000 fluorescence microscope (Keyence, Osaka, Japan). Photographs were combined by z-stacking. Viable and dead cells in the gels were observed. Gels loaded with cells from three donors were independently investigated.
Human VEGF and bFGF enzyme-linked immunoassay of cell culture supernatants. The following experiments were performed independently on MSCs harvested from three donors. Supernatants were collected during the six weeks of cell culture, immediately frozen at -80°C and stored until analysed. Human VEGF and bFGF Mini ELISA Development Kits were purchased from Peprotech (Hamburg, Germany) and used as per the manufacturer's instructions. For the sandwich ELISA, 100 μl of capture antibody at a concentration of 0.50 μg/ml was incubated in each well overnight at room temperature. Plates were washed four times and blocked, and subsequently 100 μl of standard (hVEGF or bFGF) or sample was added. Three aliquots of each sample were tested. A biotin-labelled detection antibody was added at a concentration of 1 μg/ml. Plates were again washed four times, and then 100 μl of a 1:2000 dilution of avidin-horseradish peroxidase conjugate (avidin-HRP) was added and incubated for 30 minutes at room temperature. Finally, 100 μl of ABTS substrate was added and the plates were read at 405 nm and 650 nm using the FLUOstar Omega plate reader. Plates were measured at 5-minute intervals for a duration of 20 minutes.
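Quantification from the ELISA readings amounts to fitting a standard curve and interpolating the sample signals; a minimal sketch of this step is given below (the standard concentrations and optical densities are hypothetical, and a four-parameter logistic fit is often preferred over the linear fit shown here):

```python
import numpy as np

def fit_standard_curve(conc_pg_ml, od_corrected):
    """Linear fit of blank-corrected OD (405 nm minus 650 nm) versus standard concentration."""
    slope, intercept = np.polyfit(conc_pg_ml, od_corrected, 1)
    return slope, intercept

def conc_from_od(od_sample, slope, intercept):
    return (od_sample - intercept) / slope

standards = np.array([0.0, 62.5, 125.0, 250.0, 500.0, 1000.0])   # pg/ml (hypothetical)
od        = np.array([0.02, 0.10, 0.19, 0.37, 0.72, 1.40])       # hypothetical readings
slope, intercept = fit_standard_curve(standards, od)
print(conc_from_od(0.55, slope, intercept))   # pg/ml in the supernatant aliquot
```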
Statistical analysis. For the statistical analysis, SPSS 20.0 for Mac (IBM SPSS Statistics, New York, USA) was used. Normal distribution was tested and confirmed with the Kolmogorov-Smirnov test. Quantitative parameters were evaluated by calculating means and standard deviations (SDs). Depending on the data distribution, Student's t-test or the Mann-Whitney U test and the Wilcoxon test were used for unpaired and paired measurements, respectively. All measurements are expressed as mean ± 1 SD. Statistical significance was accepted at p < 0.05; p-values below 0.001 were considered highly significant.
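The decision flow of this analysis (normality check, then a parametric or non-parametric two-sample test) can be summarised in a short sketch; SciPy function names are used here purely for illustration and this is not the SPSS procedure actually employed:

```python
from scipy import stats

def compare_groups(x, y, paired=False, alpha=0.05):
    """Test normality of both groups, then apply the corresponding two-sample test."""
    normal = (stats.kstest(stats.zscore(x), "norm").pvalue > alpha and
              stats.kstest(stats.zscore(y), "norm").pvalue > alpha)
    if normal:
        result = stats.ttest_rel(x, y) if paired else stats.ttest_ind(x, y)
    else:
        result = stats.wilcoxon(x, y) if paired else stats.mannwhitneyu(x, y)
    p = result.pvalue
    return p, p < alpha, p < 0.001   # p-value, significant, highly significant
```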
Formulation of a hydrogel which meets the desired requirements
Solidification of the alginate gel within 30 minutes. To achieve this, we decided to add a carrier for Ca²⁺ ions directly into the alginate matrix. A gelling mechanism based on calcium carbonate (CaCO3) and D-glucono-δ-lactone (GDL) (internal gelation) has been described [9,10]. Here, the GDL is slowly hydrolysed, which results in a drop in the pH of the solution and subsequently a release of Ca²⁺ ions from the CaCO3 into the solution. We observed a time-delayed release of calcium ions, resulting in homogeneous alginate gels. However, internal gelation alone also did not provide a gel that solidifies within the time frame of 30 minutes (data not shown).
Since gelation induced by either internal or external gelation alone did not provide gels that solidify within the desired time frame, we decided to combine the two methods (external + internal gelation). We added CaCO3 and GDL at a ratio of 2:1 at different concentrations to 2% alginate solutions (internal gelation) and additionally added CaCl2 solution (external gelation) to increase the gelation speed. The mechanical strength of the formed alginate gels was measured directly after preparation and after one week of storage. For all samples, a 200 mM CaCl2 solution was used, which had previously been determined to be the optimal Ca²⁺ concentration for external gelation (Fig. 2). As shown in Fig. 3, the addition of 50 mM CaCO3 results in full gelation within 30 minutes, and the prepared gels withstand a compression force of 20 N on average. However, storage of the gels for one week resulted in a significantly decreased mechanical stability for all gels prepared with 50 mM CaCO3 or less. Markedly smaller changes were observed when the gels were prepared with 100 mM to 200 mM CaCO3. For these samples, the required compression force directly after preparation and after one week of storage remained almost constant (Fig. 4).
Effect of the chosen alginate on gel properties. We investigated two different alginates to study the effect of alginate composition on the matrix properties (Table 1). Alginate 1 was a synthetic alginate, whereas alginate 2 was an extract from brown algae. For alginate 2, two different batches were used to check for batch-to-batch variations. Gels were prepared as described before. In addition, PEG 300,000 and sodium hyaluronate (HA) were added to some samples, forming alginate hybrid gels. The mechanical properties were investigated as described before. Gels prepared from alginate 1 required lower compression forces compared to alginate 2. Between the two batches of alginate 2, no significant differences were observed. Consequently, alginate 2 was used for further studies. For the alginate hybrid gels, a similar trend was observed: formulations containing alginate 1 and the adhesive polymers (PEG, HA) required lower compression forces than the same gels prepared with alginate 2.
Enhanced adhesion to collagen. To improve the adhesion of the alginate gels to collagen, we systematically investigated the formulation of alginate hybrid gels by the addition of adhesive polymers. PEG with a molecular weight of 300,000 Da and HA with an intrinsic viscosity of 2.7 m³/kg were added at different concentrations to the 2% alginate gels prepared using the combination of internal and external gelation (Table 2). For all samples, 200 mM CaCl2 was used for the external gelation and 150 mM CaCO3 for the internal gelation. PEG 300,000 was chosen due to its reported bioadhesive properties [11]. The adhesive properties were then investigated using three different model surfaces: Teflon as an example of a surface with minimal adhesiveness, polystyrene as a polymer with some adhesiveness, and gelatine gels as a model of biological origin. A schematic of the adhesion force testing is provided in Fig. 1b. The addition of PEG resulted in slightly increased adhesive forces on all test surfaces, whereas the addition of sodium hyaluronate promoted adhesion to gelatine (Fig. 4a). Furthermore, we found an additive effect of both polymers on the adhesion properties. Alginate hybrid gels containing 0.1% PEG and 0.1% HA or 0.25% PEG and 0.2% HA showed the best adhesive properties on the three different surfaces.
However, the addition of either PEG or HA decreased the mechanical stability of the alginate gels in a dose-dependent manner (Fig. 4b). Of the tested concentrations, only the addition of 0.1% PEG and 0.1% HA to the 2% alginate gel sufficiently improved adhesion to gelatine while keeping the mechanical stability of the gel unaltered.
Sufficient mechanical stability for 6 weeks. The mechanical stability of the plain alginate gels and of the alginate hybrid gel containing 0.1% PEG and 0.1% HA was investigated over a period of 6 weeks, the time frame required for healing after surgical intervention. Alginate gels incubated in PBS solution showed a decrease in mechanical stability upon storage. This decrease was observed for the alginate gel as well as for the alginate hybrid gel (Fig. 5). However, the shape of the gels did not change, and the gels provided sufficient mechanical stability for the suspended cells.
Water uptake capacity of alginate hydrogels. The mass of the alginate hydrogels was weighed directly after preparation and 1, 7, 14, and 20 days after preparation to investigate whether incubation in DMEM leads to swelling of the hydrogels over time due to water uptake. No significant water uptake into the alginate gels was observed after incubation in DMEM high glucose for 20 days (data not shown).
Injectability through a standard syringe. Of utmost importance for the applicability of the investigated gels is not only their rapid gelling and mechanical stability, but also their injectability through a standard syringe, making the gels applicable for use in an arthroscopic setting. Rheological investigations of 2% alginate gels with 200 mM CaCO3 were carried out at three different temperatures (5°C, 20°C and 37°C) immediately after compounding. The results reveal that the alginate gels remained sufficiently injectable through a standard syringe for 10 minutes at room temperature (Fig. 6). As expected, increasing the temperature resulted in faster gelation, whereas lower temperature led to almost no gelation, as the measured viscosity remained low. Alginate hybrid gels with 0.1% PEG and 0.1% HA showed a faster gelation speed at all investigated temperatures. Optical observation of gel spreading after injection confirmed the injectability of 2% alginate gels with the addition of 0.1% PEG and 0.1% HA (data not shown).
Diffusion of cytokines from the stem cells through the gel matrix. We performed diffusion experiments using the two model proteins BSA and lysozyme. The model proteins were added to the alginate hybrid gels at a concentration of 5.0 mg/ml. Protein diffusion into the supernatant was measured after gelation by UV spectrophotometry. Three different conditions were used to investigate potential sampling effects on the release: 1) non-sink conditions (black squares), 2) half buffer exchange at short intervals (open circles), and 3) half buffer exchange at long intervals (black diamonds). The experiment revealed good permeability of the alginate gels for both model proteins, irrespective of the employed sampling mode (Fig. 7).
Evaluation of alginate gels loaded with primary human mesenchymal stem cells
Gel composition and cell viability. The next important aspect was to analyse the impact of gel composition on cell viability. First of all, the suspension of primary human mesenchymal stem cells within the different 2% alginate gels resulted in a homogeneous distribution of the cells within the matrix after gelation. Internal vs. external gelation and the addition of 0.1% PEG and 0.1% HA had no significant negative impact on cell viability (Fig. 8a). During the 6-week culture period, the cell number slowly increased, as visible in light microscopic investigations (data not shown). Viability testing of the cells with the AlamarBlue assay confirmed that the cells remained viable within the gels for at least 6 weeks. However, due to the limited diffusion capacity of resazurin from the gels, the AlamarBlue assay provided only qualitative rather than quantitative data on cell viability (Fig. 8b). After 6 weeks of culture, the gels were homogeneously colonized with viable cells. AlamarBlue testing of the culture dishes at each time point revealed that the cells were sufficiently immobilized within the gels and that only a marginal number of cells migrated out of the matrix (data not shown).
Gel composition and growth factor release. As proof of concept, human mesenchymal stem cells were cultured within alginate gels (2% alginate + 0.1% PEG + 0.1% HA) for 6 weeks, and the release of the anabolic growth factors VEGF (Fig. 9a) and bFGF (Fig. 9b) from the matrix was monitored. In order to simulate physiological conditions as closely as possible, 2% alginate gels and 2% alginate hybrid gels containing 0.1% PEG 300,000 and 0.1% HA were prepared in PBS and, for comparison, also in cell culture medium (DMEM). As confirmed by ELISA, a high paracrine activity of the immobilized mesenchymal stem cells was observed for 6 weeks. Robust release of VEGF and bFGF from the matrix/cell constructs could be detected during the whole observation period. Here, the addition of 0.1% PEG and 0.1% HA had no significant influence on growth factor release. Interestingly, hVEGF and bFGF release from alginate gels and alginate hybrid gels prepared with DMEM was lower compared to the samples prepared with PBS.
Discussion
In an effort to make the paracrine activity of autologous MSCs usable to enhance tissue regeneration after surgery, we designed a matrix that allows immobilization of MSCs harvested during the operation in the operation situs. One reported option to accelerate healing processes is the administration of stem cells directly into the surgical wound or onto the injured fibre. It has been reported that stem cells in particular have the ability to differentiate into a variety of tissue cell types [12,13]. MSCs expanded ex vivo differentiated into cells of the residing tissue, were able to repair the damaged tissue, and partially restored its normal function [12]. In addition, the release of a variety of signals such as growth factors (IGF-1, HGF, VEGF, IGF-2 or bFGF) and cytokines assists in the healing process [14]. One challenge, however, is to provide an extracellular matrix system which displays sufficient mechanical stability and provides an environment in which the cells survive. As the basis for this matrix we used alginate. Alginate hydrogels show good biocompatibility and are relatively inert when administered to animals [15]. Here, studies revealed that a high grade of purity is necessary to avoid inflammatory responses of the recipient, which may be induced by contaminants [16].
We found that the addition of only low concentrations of calcium ions (200 mM CaCl2) is sufficient to initiate the cross-linking process and induce gel formation ("external gelation"). These concentrations of divalent ions have been reported to be non-detrimental to cells [17]. To allow faster gelation and enhanced long-term stability, we added CaCO3 to the alginate gels as an internal source of calcium ("internal gelation"). CaCO3 has been shown to be inert and biodegradable when implanted [18]. By combining internal and external gelation, we were able to delay the degradation of the alginate matrix in vitro. Fast degradation of alginates is one of the major problems in cell therapy and tissue engineering applications [16,19].
We added hyaluronic acid and PEG to the alginate gels to improve adhesion to gelatin, which served as a model for a biological surface. Both PEG and HA are known to be biocompatible after implantation [20]. The addition of PEG and HA additively improved the adhesion forces. The low concentrations used in our gels did not significantly decrease mechanical stability or increase toxicity to cells. This is in line with previously published results using PEG to enhance the adhesive properties of alginate gels [21]. The achieved adhesion properties are a prerequisite for applying the gels by syringe in a minimally invasive procedure, as no further fixation of the gels is necessary.
The hydrogel scaffolds were designed to enhance healing after surgery of soft tissues. Here, a scaffold of low mechanical stiffness is favourable to avoid mechanical irritation at the surgical site. On the other hand, sufficient mechanical stability is necessary for application in musculoskeletal tissues, where mechanical forces act on the hydrogel. In this respect, alginate hydrogels show advantages, as they can simply be adapted to the desired stiffness by changing the alginate concentration [22]. Our study shows that alginate gels containing adhesive polymers prepared by a combination of internal and external gelation are injectable through a standard syringe, show sufficient mechanical properties, form a microenvironment in which stem cells survive, and allow cytokine diffusion.
The alginate/hyaluronic acid/PEG hybrid gels used here successfully immobilize human mesenchymal stem cells for at least 6 weeks. This is in line with previously published results for alginate beads used to immobilize cells for cell therapeutic applications [23]. Cells encapsulated within the hydrogels showed robust viability during that time period in our experiments. This is remarkable, as failure of cell therapeutic approaches using alginate gels has been interpreted as a result of hypoxia within the hydrogels [24]. However, a mild hypoxia within the hydrogels may even be desirable in our approach, as it could stimulate the immobilized MSCs to produce and secrete higher amounts of VEGF into the surrounding tissue [25].
We could show that the alginate/hyaluronic acid/PEG hybrid gels used are permeable for growth factors released by the incorporated mesenchymal stem cells. This is in line with several studies investigating alginate hydrogels as drug delivery systems. Robust secretion of the growth factors VEGF and bFGF out of the MSC-loaded hydrogels was detectable in the supernatant. Here, bFGF concentrations were greater than 500 pg/ml for at least 6 weeks, which is equivalent to at least twice the ED50 reported for bFGF [26]. In contrast, VEGF concentrations only reached half the ED50 reported for that cytokine [27]. However, the present in vitro experiments were carried out in an ambient atmosphere, in which the oxygen partial pressure is much higher than in the capillaries and tissues of the body [28]. As MSCs secrete VEGF in dependence on the oxygen partial pressure, it is likely that immobilized MSCs will secrete higher amounts of VEGF when the hydrogels are implanted under physiological conditions [25].
In our experimental setting, 3x10⁶ stem cells were used to load the gels. This number was chosen as it represents the amount of cells usually obtained from a 175 cm² cell culture flask. However, the ideal number of stem cells to immobilize within the alginate gels remains unclear. To determine this, further studies, including in vivo investigations, will be necessary.
Within the present work, we have established a method that makes the paracrine activity of MSCs usable to enhance healing after surgery. Several studies have confirmed the possibility of enhancing regeneration after surgery by local use of anabolic growth factors [1,3,4,5,29], and a long-term application was favourable [30]. This could be achieved either by repetitive injection of recombinant growth factors or by viral transfection, but neither method is transferable to the patient. The presented method could allow local application of anabolic growth factors for at least 6 weeks. It could be directly transferred to the operating room, as it is designed for single-step application in open and arthroscopic surgeries. Furthermore, the safety of the application of autologous mesenchymal stem cells has been demonstrated in a number of studies, as reviewed in [7]. Therefore, the presented method could be transferred to the clinic as a cost-effective and safe single-step method to enhance healing after a broad variety of surgeries.
Conclusion
The presented in vitro results are an encouraging proof of principle that alginate gels are a suitable matrix for stem cells. The effectiveness of this system now has to be proven in vivo. The gels can be a valuable tool to make the paracrine MSC activity usable for enhanced tissue regeneration after surgery.
| 8,141 | 2015-03-20T00:00:00.000 | ["Engineering", "Materials Science", "Medicine"] |
Characterising nematic liquid crystals using a circular patch resonator
ABSTRACT Reconfigurable microwave materials are promising candidates for designing and manufacturing tunable microwave components. Nematic liquid crystals (NLCs) are such materials, since their permittivity can be tuned by an external electric field. However, many NLC mixtures have not been properly characterised at higher frequency bands because a complex measurement setup is required. In this work, a novel method using a circular patch resonator (CPR) is developed to measure the dielectric constant and loss tangent of NLCs at microwave frequencies. In addition to using the cavity model for the preliminary design and analysing the fringing effect for better accuracy, full-wave simulations are employed to confirm the final design and aid the characteristic analysis. Three prototypes were fabricated and measured to reduce uncertainty from manufacturing defects. To avoid possible damage when a higher voltage is required for large-range tuning, a coupling mechanism is proposed between the microstrip line and the coplanar waveguides (CPWs) to replace connection through vias. A high accuracy, with an uncertainty of 0.02 for the relative permittivity estimate, has been demonstrated with experimental verification, an approximately 80% improvement over other typical methods. The simple design and PCB-based manufacturing techniques can be widely employed to characterise the properties of newly developed LC mixtures.
Introduction
Microwave-based technologies play an important role in today's society; they utilise frequencies in the millimetre-wave (mmWave) regime so as to obtain the necessary large bandwidth and satisfy the demand for higher data rates. In some communication systems, such as 5G and low-orbit satellite systems, ground mobile terminals might access the communication infrastructure via beam-steering antenna arrays to increase the coverage and reduce the latency [1,2].
To implement such devices with the function of beam steering, different methods employing various materials and techniques have been used, such as semiconductors [3], RF micro-electromechanical systems (MEMS) [4], ferroelectrics [5] and liquid crystals (LCs) [6]. However, for systems with a large number of beams, it is not practical for the beams to be steered by bulky mechanical systems when the ground terminals are moving. Hence, smart RF components with compact size and low insertion loss are demanded to adjust the beams for more efficient use of spectrum resources. To design and manufacture such smart RF components, LC shows promising characteristics. Based on their unique birefringence property, LC materials were initially utilised mainly for optical applications (e.g. displays, lenses, etc.) [7,8]. In the last two decades, following the evolution of microwave techniques, novel nematic liquid crystals (NLCs) have drawn significant attention because of their large anisotropy and low loss at higher frequency bands. In addition, compared to conventional ferroelectric materials, the main advantages of LC materials are the lower voltages required for tunability and their moderately low cost. Therefore, NLCs are promising materials for reconfigurable millimetre-wave applications [9,10]. Several types of applications based on NLCs have been demonstrated in the literature, such as phase shifters [11][12][13], tunable reflectarray antennas [14], dielectric waveguides [15,16], and steerable phased arrays [17][18][19].
Some studies have been carried out to determine the properties of LC materials in the literature. An early study characterising these properties was presented in the 1950s, in which a magnetic field was applied to align the orientation of the LC for a fast proof-of-concept in the lab [20]. Nowadays, with the development of microwave devices, several technologies covering a large frequency range have been proposed to estimate the characteristics of LCs with the help of electric biasing; these can be classified as broadband methods and resonator-based methods. Broadband methods can determine the dielectric properties over a broad frequency range, although the permittivity measurements are usually not accurate enough. Thus, they are very useful for applications in higher frequency bands (e.g. above 30 GHz for optical components). Some methods, such as a temperature-controlled coaxial transmission line [21] and a covered microstrip line [22,23], have been proposed to obtain a rough estimate of LC properties.
Compared with the broadband methods, resonator-based techniques can estimate the parameters with higher accuracy, but only at single or discrete frequencies, and are primarily applied in the low microwave frequency range. Meanwhile, one of their advantages is that the resonant frequencies and other properties can be accurately analysed by means of a cavity model. Several methods, such as using a split-cylinder ring [24,25], a patch resonator [26], a circular patch resonator (CPR) [27], and an inductively coupled ring resonator [28], have been reported in the literature. In [24], a split-cylinder ring yielded an uncertainty of ±0.22 on the permittivity measurement of the QYPD-036 material. In contrast to a microstrip line resonator [22], a CPR with a cavity can tune a larger volume of LC, potentially leading to a better orientation alignment of the molecules; accordingly, a better accuracy is expected. In [27], a CPR was also used to determine the permittivity, but a complex experimental setup was needed. In addition, compared with [27], this work provides a numerical approximation that allows a better prediction of the effective circular radius and achieves a higher accuracy for dielectric constant determination.
In this work, a low-cost printed circuit board (PCB)-based CPR with high resolution is designed, fabricated, and tested to determine the dielectric constant and loss tangent of newly developed off-the-shelf NLCs. A cavity model is developed to investigate the relationship between the physical dimensions and the resonant frequency of the CPR. The parameters of the cavity model are verified by a full-wave simulator. Finally, one CPR is designed, and three samples are fabricated and measured. The experimental results from the vector network analyser (VNA) were in good agreement with those from the full-wave simulation, as well as with the cavity model. It should be noted that the proposed CPR has an operating frequency lower than 10 GHz based on the following considerations: i) there are many wireless applications operating at sub-10 GHz frequencies, including 5G and low-orbit satellite communication systems; ii) the manufacturing cost is relatively low, and it is easier to analyse the measurement uncertainty; iii) the permittivity ε remains reasonably constant over this frequency range, while it varies relatively strongly in the higher millimetre-wave frequency bands.
The main contributions of this investigation are as follows:
• The proposed design is solely based on PCB technology without involving a complex process as in the existing methods; hence it is applicable in most scenarios where accurate characterisation is required for high-tunability devices at a low cost.
• Compared to other methods such as [27], this study considers the fringing effect of the circular patch and therefore yields a better accuracy. The error of the permittivity in the proposed method is less than 0.02, compared with typically over 0.1 for previous methods.
• To calculate ε_r with respect to the resonant frequency, a closed-form expression has been derived to determine the effective radius a_e by decoupling the permittivity ε_r, since a_e and ε_r are highly correlated.
• To avoid possible damage to the instrument caused by the external high bias voltages required for a large tuning range, this study utilises a coupling mechanism between the microstrip line and the two feeding CPWs, which is DC-isolated and therefore beneficial for applications that require a higher bias voltage.
The paper is organised as follows: Section 2 introduces the cavity model of a CPR and presents a closed-form expression relating the resonant frequency and the permittivity of the LC material; Section 3 first describes the design and fabrication of the CPR, then provides the experimental results and the uncertainty analysis, and finally compares the uncertainty of the proposed CPR with that of other typical methods in the literature. The conclusion is presented in Section 4.
Material and methods
As an anisotropic material, the unique feature of a nematic LC is that the direction of the LC molecules can be reoriented by means of an external low-frequency electric or magnetic field, which then leads to different dielectric constants. Anisotropy is the key characteristic that makes LCs attractive in microwave devices. According to the orientation of the LC molecules, the permittivity of the LC can be represented as a tensor ranging from the parallel direction ε_∥ to the perpendicular direction ε_⊥, as shown in Figure 1.
Anisotropy can be defined as the maximal difference of these two extreme values, Δε = ε_∥ − ε_⊥. The choice of LC material for a microwave device depends on some intrinsic parameters, such as the permittivity, the loss tangent and the Frank elastic constants. Among these parameters, the elastic constants are mainly considered in optical applications and are out of the scope of this work. The loss tangent is a useful parameter to evaluate the dissipation factor of RF devices and can be derived from the quality factor. Both the relative permittivity and the loss tangent have been examined in this study due to their importance for the design of microwave components.
Taking a conventional phase shifter as an example, the maximal phase delay ΔΦ mainly depends on the physical length l of the shifter and the two extreme dielectric constants ε_∥ and ε_⊥, and can be written as in [13] (a sketch of this relation is given below), where f is the operating frequency and c is the speed of light.
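A commonly quoted form of this relation, shown here as an illustrative sketch (the exact expression in [13] may be written in terms of effective permittivities rather than the bulk values), is:

$$\Delta\Phi_{\max} \;=\; \frac{2\pi f\, l}{c}\left(\sqrt{\varepsilon_{\parallel}} \;-\; \sqrt{\varepsilon_{\perp}}\right)$$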
Several LC mixtures initially designed for display devices, such as E7 (Merck KGaA) and K15 (5CB), have been used for microwave components. However, they exhibit relatively small anisotropy and high loss tangent in the mmWave frequency band. Recently, some high-performance LC materials have been developed to meet the demands of microwave applications, e.g. GT7-29001 provided by Merck KGaA, showing a large anisotropy (Δε_r > 1 at 19 GHz), but it is not widely available. In addition, some LC mixtures might have great potential for microwave applications, but their parameters are only provided for optical devices, and their characteristics in the microwave frequency band need to be thoroughly examined. Thus, to achieve the optimal performance of LC-based microwave devices, it is very important to develop low-cost and easily implemented techniques to accurately determine the characteristics of various LC materials.
This study uses a CPR to analyse the characteristics of the LC because of its simple structure and high estimation accuracy. A basic model of a CPR is shown in Figure 2. In contrast to examining structures with a rectangular patch and a transmission line, we used the cylindrical coordinate system (ρ, ϕ, z) to investigate CPRs. In addition, only one parameter (the radius a of the disc) is needed to determine the orders of the electromagnetic modes. For the LC-based CPR design, the circular patch lies in the (x, y) plane, and the LC material fills the middle cavity formed between the inverted circular patch and the ground plane. The bulk of LC material in the cavity can be treated as a substrate for the characteristic analysis. Meanwhile, to control the direction of the LC molecules, the patch and the ground plane are also used as electrodes to provide the bias voltage.
Several approaches have been adopted to design CPRs, including the transmission line model, the cavity model and full-wave simulations. In this work, the cavity model and full-wave simulation were used to design the CPRs and characterise the LC materials in the devices. Full-wave simulation is more accurate than the other models; however, it needs a large number of optimisation iterations and usually gives less physical insight, whereas the cavity model is simpler and provides good physical insight. The following presents the main principle and key parameters of the cavity model. The results for a CPR design based on the full-wave simulation and the cavity model are given and compared at the end.
Cavity model of a CPR
RF properties of LC materials can be studied by using a CPR with the cavity model, which exploits the relationship between the resonant frequency and the dielectric constant, and is a commonly used method to estimate the permittivity ε_r and the loss tangent. As shown in Figure 2, since the substrate height h is much smaller than the wavelength λ of the RF signal (h ≪ λ), the primary electric and magnetic modes supported in a CPR are TM^z, where z is taken perpendicular to the patch.
Based on electromagnetic field theory, the fields propagating in the cavity can be derived by using the vector potential method, which satisfies the homogeneous wave equation in cylindrical coordinates. In this section, the principal equations are provided to derive the dielectric properties; a more detailed description of the cavity model can be found in [29] and Appendix A. The resonant frequency corresponding to the permittivity for the dominant mode TM^z_mn0 can be written as

f_mn0 = χ′_mn υ_0 / (2πa√ε_r),   (2)

where f_mn0 is the resonant frequency, which is the key parameter required to derive ε_r and can be obtained from the scattering parameters S_11 or S_22 based on VNA measurements or full-wave simulations; ε_r is the relative permittivity of the LC material; a is the radius of the inverted circular patch; h is the height of the LC in the cavity; υ_0 is the speed of light in free space; and χ′_mn (m = 0, 1, 2, …; n = 1, 2, 3, …) is the n-th zero of the derivative of the Bessel function J_m(χ), which determines the order of the resonant frequency. Therefore, for a given TM^z_mn0 mode, the dominant value of χ′_mn can be calculated from the Bessel functions. Taking the first four TM^z_mn0 modes as an example, the corresponding values of χ′_mn in ascending order are χ′_11 ≈ 1.8412, χ′_21 ≈ 3.0542, χ′_01 ≈ 3.8318 and χ′_31 ≈ 4.2012.
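A minimal numerical sketch of (2) is given below (not the authors' code; the radius a = 6.5 mm is taken from Section 3, the permittivity is one of the test values used later, and the helper name f_mn0 is ours). It evaluates the cavity-model resonant frequencies of the first four TM^z_mn0 modes using the Bessel-derivative zeros from SciPy.

```python
# Minimal sketch of (2): cavity-model resonant frequencies of a circular patch resonator,
# f_mn0 = chi'_mn * v0 / (2*pi*a*sqrt(eps_r)), using the physical radius (no fringing correction).
import numpy as np
from scipy.special import jnp_zeros

C0 = 299_792_458.0  # speed of light in free space (m/s)

def f_mn0(a, eps_r, m, n):
    """Resonant frequency (Hz) of the TM^z_mn0 mode of a circular patch of radius a (m)."""
    chi_prime = jnp_zeros(m, n)[-1]  # n-th zero of the derivative of J_m
    return chi_prime * C0 / (2 * np.pi * a * np.sqrt(eps_r))

if __name__ == "__main__":
    a, eps_r = 6.5e-3, 2.22  # geometry from Section 3 and one of the test permittivities
    for m, n in [(1, 1), (2, 1), (0, 1), (3, 1)]:  # the first four modes in ascending chi'_mn
        print(f"TM_{m}{n}0: {f_mn0(a, eps_r, m, n) / 1e9:.2f} GHz")
```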
Effective radius of a circular patch
In practice, f_mn0 predicted from the cavity model based on (2) is usually slightly higher than the measured value. The reason is that (2) does not take into account the fringing effect of the circular patch, which makes the electrical radius larger than the physical size, as shown in Figure 3(a). To optimise the parameter used in the cavity model and improve the estimation accuracy of ε_r, we need to compensate the radius a for this extra length. In this work, an effective radius a_e is used to include the fringing effect; a good approximation is given in [30] and can be written as

a_e = a[1 + (2h/(πaε_r))(ln(πa/(2h)) + 1.7726)]^(1/2).   (3)

Based on the geometry of the proposed CPR (a = 6.5 mm, h = 0.265 mm), and considering permittivities in the range 1 < ε_r < 3, the resonant frequencies corresponding to the physical radius a and the effective radius a_e are calculated and plotted in Figure 3(b). We can observe that the frequencies determined from a_e by (3) are lower than those obtained with the physical radius a, mainly because a_e is greater than a due to the fringing effect. Meanwhile, the deviation decreases as ε_r increases, and the maximum discrepancy is about 1 GHz when ε_r = 1. Adopting the effective radius a_e and taking χ′_11 as an example, the resonant frequency for the dominant mode TM^z_110 in (2) can be rewritten as

f_110 = χ′_11 υ_0 / (2πa_e√ε_r).   (4)
Determination of permittivity
With known values of f_110 and a_e, ε_r can be directly calculated from (4) as:

ε_r = (χ′_11 υ_0 / (2πa_e f_110))².   (5)

However, in order to calculate a_e using (3) we must already know the value of ε_r, which means that a_e and ε_r are strongly coupled in (3), so we must first decouple a_e from ε_r. In the literature, it is noted that a ≫ h for LC-based devices (since h should be small enough to give a quick response time for tuning), and that for the majority of microwave LC materials ε_r is usually between 1.6 and 3.5 in the sub-10 GHz band. We can therefore fit a_e by assigning different values of ε_r in the range from 1.6 to 3.5 in (3). With this approach, for the proposed CPR structure, the effective radius a_e associated with a is obtained by an interpolation method and can be expressed as the numerical approximation denoted (6).
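The coupling between (3) and (5) can also be resolved numerically by a simple fixed-point iteration rather than by the interpolation fit (6); the sketch below is not the authors' procedure (the function names a_eff and eps_from_f110 are ours) and it assumes the standard fringing-field approximation quoted above for (3).

```python
# Minimal sketch: recover eps_r from a measured TM_110 resonance by iterating between
# the effective-radius correction (3) and the closed-form inversion (5).
# This is an alternative to the interpolation fit (6) described in the text.
import numpy as np
from scipy.special import jnp_zeros

C0 = 299_792_458.0
CHI_11 = jnp_zeros(1, 1)[0]  # ~1.8412, first zero of J_1'

def a_eff(a, h, eps_r):
    """Fringing-field corrected radius (assumed standard form of (3))."""
    return a * np.sqrt(1 + (2 * h) / (np.pi * a * eps_r)
                       * (np.log(np.pi * a / (2 * h)) + 1.7726))

def eps_from_f110(f110, a, h, n_iter=20):
    """Start from the physical radius, then alternately refine a_e and eps_r."""
    eps_r = (CHI_11 * C0 / (2 * np.pi * a * f110)) ** 2
    for _ in range(n_iter):
        eps_r = (CHI_11 * C0 / (2 * np.pi * a_eff(a, h, eps_r) * f110)) ** 2
    return eps_r

if __name__ == "__main__":
    # Geometry from the paper; the resonant frequency is an illustrative value only.
    print(eps_from_f110(f110=8.81e9, a=6.5e-3, h=0.265e-3))  # ~2.2 for this input
```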
Determination of loss tangent
In addition to the permittivity, the loss tangent tanδ can be estimated from the quality factor; the approximate relationship between them is given in [29], where Q is the unloaded quality factor when the energy dissipation of the conductor is considered and the radiation loss is neglected in the microwave resonator. Accurate determination of the loss tangent is a very complex task [31], since the losses from various sources are difficult to distinguish. For transmission-type resonators such as a CPR, the loaded quality factor can be calculated as

Q_L = f_mn0 / (f_2 − f_1),   (8)

where f_1 and f_2 are the half-power frequencies around the resonant frequency f_mn0 for a given mode, which can be obtained at the 3 dB power points on either side of the resonance. The loss tangent is defined as [32]

tanδ = ε″_r / ε′_r,   (9)

where ε″_r is the imaginary part of the permittivity, which can be determined from the quality factors of the empty and filled cavity as in (10); here V_c is the volume of the empty cavity and Q_0 is the corresponding quality factor. When the LC fills the cavity, the volume V_s remains unchanged, but the quality factor is reduced to Q_L. ε′_r is the real part of the permittivity, which can be obtained from (11), where f_c is the resonant frequency of the empty cavity and f_s is the resonant frequency after the NLC sample is filled. With (8)-(11), the loss tangent can be derived accordingly.
To sum up, according to the above equations, ε_r and the loss tangent tanδ can be determined either from a full-wave simulation or from VNA measurements. Taking the VNA measurement as an example, f_110 is obtained from the S-parameter (S_11 or S_22) measurements with TM_110 as the dominant mode; the relative dielectric constant ε_r is then derived using (5) together with the numerical approximation of a_e defined in (6). Finally, the loss tangent tanδ can be estimated by (9) with the corresponding frequencies.
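For the loss-tangent step, the bandwidth reading can be sketched as follows. This is not the authors' full procedure: the function name loaded_q is ours, and treating 1/Q_L as the loss tangent ignores the conductor-loss and unloaded-Q corrections discussed above, so it should be read only as a rough upper bound.

```python
# Minimal sketch: loaded quality factor Q_L = f_0 / (f_2 - f_1) from a measured |S11| trace,
# reading f_1 and f_2 at the points 3 dB above the resonance dip as described in the text.
import numpy as np

def loaded_q(freq_hz, s11_db):
    """freq_hz, s11_db: equal-length 1-D arrays covering a single resonance dip."""
    i0 = int(np.argmin(s11_db))            # resonance: minimum of |S11| in dB
    f0 = freq_hz[i0]
    level = s11_db[i0] + 3.0               # 3 dB above the dip
    inside = np.where(s11_db <= level)[0]  # points inside the 3 dB bandwidth
    f1, f2 = freq_hz[inside[0]], freq_hz[inside[-1]]
    return f0, f0 / (f2 - f1)

# Usage with VNA data exported as two columns (frequency in Hz, |S11| in dB):
# f0, q_l = loaded_q(freq, s11_db)
# tan_delta_upper_bound = 1.0 / q_l
```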
Feasibility analysis of the cavity model
In order to verify the feasibility of the proposed cavity model, we first calculate the resonant frequencies using (4) and then compare the results with those from the full-wave simulation. The main parameters used for this analysis are identical to those of the fabricated CPR described in Section 3, i.e. a = 6.5 mm and h = 0.265 mm. Five distinct dielectric constants (in the range from 2.22 to 2.98) are assigned for verification, and the resonant frequencies are obtained from the cavity model and the full-wave simulation, respectively, as illustrated in Figure 4. In Figure 4, the curves represent the reflection coefficient S_11 from the full-wave simulation, and the straight lines are the frequencies calculated using (4) from the cavity model. Taking ε_r = 2.22 as an example, the resonant frequency at the lowest value of S_11 from the full-wave simulation is 8.84 GHz, while the cavity model estimates 8.81 GHz; the discrepancy is only 0.03 GHz, which demonstrates the high accuracy of the cavity model in determining the dielectric characteristics.
Experimental results and discussion
To accurately evaluate the characteristics of LC in the mmW band, a CPR was designed with parameters optimised by full-wave simulation. Three prototypes were fabricated and tested. The experimental results, including the permittivity and loss tangent of the two types of newly developed LC materials, are provided and verified.
Design and fabrication
Figure 5 illustrates the structure of the proposed CPR: the LC is contained in a closed cavity between the circular patch on the top and the ground plane on the bottom. As shown in Figure 5(a), three holes are arranged on the top layer. In contrast to a conventional CPR design, this work proposes a coupling mechanism for the microwave transmission, to avoid possible damage to the instrument caused by a higher bias voltage. Instead of a direct contact through vias, copper patches are used to couple the RF signal between the transmission line of the resonator and the feeding CPWs, as shown in Figure 5(c). A more detailed analysis of the coupling mechanism is given in Section 3.2.
Figure 6 shows the key fabrication processes of the CPR. The top board contains the primary elements, including the inverted circular patch, the coupling patches, the CPWs, the feeding lines and the filling holes; the CPWs are designed with a 50-Ω characteristic impedance, as shown in Figure 6(a). The board after the deposition and curing of polyimide (PI, AL 1254) on the patch surface is presented in Figure 6(b); the thickness of the PI is about 100 nm, and the curing process at 230 °C took 20 minutes. The microscopic grooves are produced by rubbing the PI surface with velvet in a single direction, as shown in Figure 6(c). Figure 6(d) shows a device under test (DUT) connected to a vector network analyser (VNA, Keysight N9918A).
Experiments
In this section, experiments are carried out to estimate the dielectric characteristics of the newly developed LC materials. The procedure to determine the dielectric properties of the NLCs is summarised in three steps. Step 1: the reflection coefficients of the DUTs are acquired with the VNA to determine the resonant frequencies. Step 2: the dielectric constant ε_r is calculated using (5) from the cavity model. Step 3: the value of ε_r obtained in Step 2 is further verified with the full-wave simulation.
As mentioned earlier, in order to support higher bias voltages and thereby exploit the full anisotropy, a coupling mechanism is proposed in this work for feeding the resonator. Thus, we first analyse the impact of the coupling mechanism in comparison with the conventional via arrangement. The relative permittivity of air (ε_r = 1, without the LC material present) is used as a benchmark to evaluate the accuracy of the method. The scattering parameter S_11 obtained from measurement and from the full-wave simulation is shown in Figure 7. It shows that the coupling mechanism pushes the resonant frequency slightly higher compared with the via arrangement. When the coupling mechanism is employed, the resonant frequency from measurement (12.48 GHz) is very close to that from simulation (12.45 GHz) (see Table 1 for details).
Two types of LC mixtures, JC-M-LC-E7 from JCOPTIX and QYPD-470-10-N001 from Qingdao QY Liquid Crystal Co., are used for testing. They are commercially available and both have been developed primarily for optical applications. Their dielectric properties in the optical spectrum are summarised in Table 2, but their characteristics at microwave frequencies are not described; the molecular structures of the intermediate compounds of typical E7 mixtures are depicted in Figure 8. These mixtures can potentially be employed to design and manufacture microwave or mmWave components. This study therefore investigates their dielectric characteristics (dielectric constant and loss tangent) in the microwave frequency bands. We point out that, to the best of our knowledge, this is the first time the properties of JC-M-LC-E7 have been examined at microwave frequencies.
To determine the anisotropy and loss tangent of the two types of LC (JC-M-LC-E7 and QYPD-470-10-N001), experiments were performed by applying external bias voltages. To reduce random errors, three devices, dubbed DUT1, DUT2 and DUT3, were fabricated, as shown in Figure 9. The LC material was filled into the cavity through the filling holes, and a 1 kHz sine-wave voltage was applied with an AC power source. By adjusting the voltage step by step, the largest phase shift was observed when the bias voltage reached 32 V (peak-to-peak), at which point the long axis of the LC molecules was expected to be approximately parallel to the electric field of the propagating waves.
With JC-M-LC-E7 filled in the cavity of DUT1, Figure 10 shows the scattering parameter S_11 obtained from the VNA and from the full-wave simulation, respectively. There is good agreement in the resonant frequency between simulation and experiment, whether 0 or 32 V was applied. Figure 11 presents S_11 for JC-M-LC-E7 measured with the three DUTs; as expected, the three DUTs show similar performance. Similar results were also observed for QYPD-470-10-N001, as shown in Figure 12. Using the frequencies acquired from the measurements, the dielectric constant and loss tangent of both LC mixtures were calculated based on the cavity model, and the results are listed in Tables 3 and 4, respectively. The terms 'unbiased' and 'biased' in these tables mean that 0 V and 32 V were applied, respectively.
Uncertainty analysis
Uncertainty may exist at every step of the investigation. There are many potential sources of error, such as material stability, manufacturing tolerances and the optimal position for biasing. Two key steps have been taken to reduce the uncertainty: (1) a coupling mechanism is used to connect the transmission line of the CPR to the CPW, which gives a more stable and accurate measurement than a direct connection through vias, especially at higher external voltages; (2) more than one sample has been made for cross-checking and reducing the uncertainty, with three devices fabricated to provide reliable tests of the proposed method. The resonant frequencies and the corresponding dielectric constants from measurement and simulation are summarised in Table 5, and only a slight discrepancy among these devices is observed. The error in the anisotropy determination is within 0.02.
To clearly demonstrate the performance of the proposed CPR, an uncertainty comparison between the proposed method and other similar methods in the literature is listed in Table 6. As shown in Table 6, the uncertainty of the permittivity determination in this work is about 0.02, which is much lower than that of other conventional techniques; a significant improvement is observed.
Discussion
This work shows that the resonator-based technique can achieve high accuracy in determining the permittivity and loss tangent of NLCs at a low cost. However, some problems remain to be addressed in future work. For example, although we have analysed the effect of the coupling mechanism, quantitative analyses such as a numerical model, the propagation characteristics and the impedance mismatch should be investigated more thoroughly. The alignment procedure is critical to establish and control the relationship between the orientation of the LC molecules and the polarisation of the waves propagating through the material [33]; however, the performance of the alignment procedure cannot be evaluated by simple methods. In addition, one concern in practical applications is the response time of the LC, which depends primarily on the thickness of the LC cells. The response time is not the main focus of this study and will be investigated in future work.
Conclusions
A technique using a CPR to determine the dielectric constant and loss tangent of NLC materials has been developed and validated at a moderately low cost. Numerical methods, including the cavity model and full-wave simulation, are used to assess the CPR performance with the optimised design parameters, aiming at a high accuracy for material characterisation. Three prototypes were manufactured with a PCB-based process and measured to reduce random errors. The results demonstrate good agreement between simulation and experimental measurements. The uncertainty in determining the anisotropy of the NLC materials with the proposed CPR is below 0.02, which is significantly lower than that reported in the literature. The proposed method can be used to support the design of high-performance devices in the microwave and mmWave frequency bands.
Figure 1. (Colour online) Orientation of LC molecules under different biasing voltages.
Figure 2. (Colour online) Geometry of a CPR with parameters of circular radius a, substrate height h and dielectric constant ε_r in a cylindrical coordinate (ρ, ϕ, z) system.
Figure 3. (Colour online) Fringing effect and resonant frequencies corresponding to the circular radius a and effective radius a_e vs. different dielectric constants.
Figure 4. (Colour online) Comparison of the resonant frequencies from the cavity model and a full-wave simulation vs. different values of the dielectric constant of the LC material.
Figure 5.
Figure 6. (Colour online) Key fabrication steps for the CPR: (a) main board including the circular patch, (b) depositing and curing treatment of PI on the patch surface, (c) microscopic grooves on the PI surface, and (d) a CPR under test using a VNA.
Figure 7. (Colour online) Scattering parameters of the empty CPR device with the coupling mechanism and via connection without LC filled in the cavity (ε_r = 1).
Figure 9. (Colour online) Three CPR devices manufactured for material characterization.
Figure 8. Molecule structures for typical E7 intermediate compounds, and associated empirical formulas.
Figure 10. (Colour online) Scattering parameter S_11 for JC-M-LC-E7 from the full-wave simulation and experiment using DUT1 at 0 and 32 V, respectively.
Table 1. Resonant frequency and calculated permittivity for air from the CPR with the coupling mechanism and via connection (where ε_r = 1 is the norm).
Table 2. Comparison of dielectric properties of available new NLCs and the classical mixture E7 (Merck) for display applications.
Table 3. Resonant frequency, calculated dielectric constant and loss tangent for JC-M-LC-E7 using three DUTs.
Table 5. Uncertainty analysis based on resonant frequency and interpolation from full-wave simulations when the cavity is empty.
Table 6. Uncertainty comparison of the proposed work to that of methods found in the literature.
On Something Like an Operational Virtuality
We outline here a certain history of ideas concerning the relation between intuitions and their external verification and consider its potential for detrivializing the concept of virtuality. From Descartes and Leibniz onward to 19th-century geometry and the concept of "invariant" that it shares with 19th-century psychology, we follow the thread of what might be informally called an "operational" conception of the virtual, an intuition progressively developed in the 20th century from group-theoretical thinking into "functorial" thinking (in the context of category theory), and eventually into intuitions for the concept of "univalence" (homotopy type theory) and its implications for the meaning of equality and identity. At each turn, skeptical arguments haunt this history's modes of exteriorization, proof, and verification; we consider the later Wittgenstein's worries concerning rule following and the apparent unbridgeable gap between formal theory and informal practice. We show how the development of mathematical intuitions and formalisms in the last century and the discovery of a deep connection between intuitionistic logic and computation have begun to respond to some of these concerns and favour a conception of virtuality that is operational, constructive, pragmatic, and hospitable to scientific detrivialization.
(Re-)Naturalizing the Virtual
John Duns Scotus employed the concept of virtuality in the context of natural theology. It was here a tool for discussing the nature of God through the rational engagements of the intellect, without recourse to divine revelation or dogma. "Is it possible by natural means for man's intellect in the present life to have a simple concept in which concept God is grasped?" (Scotus John Duns 1987, p. 17) An affirmative response to this question necessitated a bridge between the created and the uncreated, between existence and essence, such that being in each case could soundly be discussed using the same basic concepts, and that we might therefore begin to tame the transcendental. In Duns Scotus, virtuality was this means to bridge the transcendental gaps between categories, and part of a strategy for naturalizing the divine, for it would allow us to rigorously consider the concept of infinite being as equivalent, virtually, to the perfections of God. To be is to have attributes, and attributes always come attached to entities. There can be no discussion of si esse without a discussion of quid est, for if a thing exists, it will automatically have properties and some descriptions will automatically be appropriate to it. Thus, for Duns Scotus, all existing things virtually contain their properties and attributes, and any eventual knowledge of such things is always already virtually included in the thing in question. Indeed, the virtual was not here wielded against naturalism but was actually part of a method for naturalizing the divine. The virtual was what ensured the correspondence between what is and what is said of what is, or between the intuition and the entity's external manifestation as a real existent. To exist at all is to virtually contain proofs of existence. Each thing in the world, if it exists, thereby contains all the virtues by which we may come to know it.
Later on, of course, with Bergson the virtual becomes the site of pure flux, difference, subjectivity, creativity, and indeterminacy, allied to the various heterogeneities and continuities that he championed against the rules, equations, and discretizations of modern science. We are thus left with an uncomfortable situation: either the virtual link between concepts and percepts, intuitions and exteriorizations, is nothing more than wishful thinking, or the as-if traps us within a finitude wherein intelligibility is tautologically disarmed by solipsistic worries. In the later Wittgenstein's investigations and remarks, the chasm between the two perspectives is recast as a problem of rule-following, falling through the gap between saying and doing, theory and practice, deontic and ontic. We pick up the thread of Cassirer's highlighting of parallel developments in 19th and early 20th century mathematics, physics and psychology, which have since been extended beyond group theory into category theory and eventually, through a convergence with intuitionistic logic and computability theory, to the notion of univalence. In a way reminiscent of how the medieval arguments for univocity tried to bridge the worlds of the created and uncreated, these latest mathematical trials of equality and equivalence have begun to build a very subtle bridge between rule and execution, theory and practice, through a progressive illumination of the ever more delicate structures that span these oppositions from behind the scenes. Perhaps the perceived impasse was the effect of an overly naive conception of identity, that of Leibniz's identity of indiscernibles: the gap between being and that which is said of being could only be bridged by detrivializing identity and difference, such that the textures and fibrations behind their bare opposition could begin to be mapped out, showing a way beyond the facile and unpragmatic judgment that "it is what it is".
Exteriorization and Proof
. . . may I not [ . . . ] be deceived every time I add two and three or count the sides of a square, or perform an even simpler operation, if that can be imagined? (Descartes et al. [1641] 1998) In his Meditations on First Philosophy, Descartes showed the ease with which we can doubt even our most basic assumptions. In everyday life, we trust our perceptions and concepts, we trust that 2 + 3 = 5; we trust that when we are performing the calculation some evil demon is not swooping in to change the list of numbers beneath our eyes. But what does this trust hold to? Leibniz, like other thinkers of his time, saw the importance of finding ways of overcoming this skepticism. Is there any assurance that our thoughts somehow align, correspond, or represent things as they are? Leibniz's attempt to escape skepticism turned on his deep appreciation for the fact that just because we can state the existence of something does not mean it is realizable, or constructible. This is obvious in mathematics: we can say "2 + 3 = 6", or "parallels intersect", but just stating it does not make it true. There are specific ways the properties combine and compose that restrict their expression. Leibniz reckoned that truth could only be determined in the bottoming out of our lines of explanation, when the operational analysis halts, and where the intrinsic universal character of the substance expresses its pure distinction from other substances and aggregates.
Thus, for Leibniz, substances are revealed through acts of demonstration in extension. The concept resides in its potential exteriorization. In order to make sure the demon is not deceiving us, we have to exteriorize our thoughts (which for him are themselves made up of little perceptions, real substances), label them with symbols and classify the combinatorial rules that specify how they compose with other ones. Once this is done, calculemus, says Leibniz. We need to "shut up and calculate", we need to prove statements in demonstration, through an iterative analytical process that may well exceed our capacities of intuition. We need to exteriorize the vague and indistinct concepts in our souls, actualize them in extension, and process them down to their indubitable proofs of existence. Thus, the universal characteristic, a language that encodes our concepts, must be combined with the calculus ratiocinator, the physical machine, external to our minds, that processes the universal language. He thought that by crunching the numbers on such a sophisticated machine, even the most subtle metaphysical questions could be distilled to their bare distinctions. The point of exteriorizing this process, and not just calculating in your head, was that we needed to make sure we were not deceiving ourselves: in this way, we could check the proof, and locate any errors. Cassirer explains this important motivation in Leibniz's thought: ...even where all the rules of thought are applied with formal correctness, there always remains a possibility that the contents of thought, instead of being repeated in identical distinctness, may change unbeknown to us. As we know, Descartes saw no epistemological but only a metaphysical way out of this labyrinth: his invocation of "God's veracity" does not appease or resolve the doubt but simply strangles it. Yet here precisely lies the point of departure for Leibniz' development of the technique and methodology of mathematical proof. It can be shown historically that Descartes' skepticism about the certainty of the deductive method was the force that impelled Leibniz to his theory of proof. If a mathematical proof is to be truly stringent, if it is to embody real force of conviction, it must be detached from the sphere of mere mnemonic certainty and raised above it. The succession of steps of thought must be replaced by a pure simultaneity of synopsis. (Cassirer [1923] 1985) Cassirer goes on to mention the echo of this idea in David Hilbert's formalist program, which was indeed in the same spirit: the axiomatic method sought clear and distinct foundations for mathematics. But his note, at the end of the quote, that the successive operations of thought must be replaced by a "pure simultaneity of synopsis" is a rather Kantian reading of what Leibniz seems to have had in mind. Cassirer emphasizes the symbolic character of the formalization and proof, rather than the mechanical character, the characteristica rather than the ratiocinator. But Leibniz suggested that (at least) some of these proofs could not possibly be held within the intuition, and that, even when exteriorized, some aspects of the universe would remain indistinct: like incomputable, non-terminating programs, some series of operations would go on crunching forever, expressing indistinct analyses that "only God" could grasp in full distinction.
It is Kant who believed that such an externalized process of calculation was insufficient for knowledge: concepts needed to be grasped in sensuous intuition; they could not be merely written down or materialized in a computer process, but needed to be intuited synoptically, in the mind's eye.
Hume had taught Kant that all the earlier metaphysical attempts at binding the soul to the body, the inside to the outside, and of accounting for the real connection between percepts and concepts, were ultimately dogmatic and held to nothing but blind faith in the goodness of God. And Leibniz held no illusions about his strategy of exteriorized proof: despite the objective demonstrability of the exteriorized method, he knew it did not completely eliminate the doubt that there is any congruence between our indistinct notions and the distinct symbols we encode into the mechanical oracle. Ultimately, Leibniz entertained his own metaphysical argument, a nuanced version of the occasionalism, popular in his day, of Cartesians like Malebranche. But Leibniz pushes it out to its limit: God intervenes as little as possible, and with the exception of miracles, only truly acts once, at the beginning of time, when he chooses these substances rather than others. God, being of the highest good, was "forced" to choose the best possible world, ensuring that at the very least knowledge and rational thought were possible. This explained, for Leibniz, why we are sometimes deceived, why there are criminals and wicked people in the world, and why horrible tragedies befall us, despite God's ultimate intervention: God was himself constrained by rules, and even the best possible world is constrained by internal relations of compossibility, such that in order for the best to happen, some bad must also take place. Echoing Pascal's probabilistic wager, Leibniz claims: It is therefore infinitely more reasonable and more worthy of God to suppose that, from the beginning, he created the machinery of the world in such a way that, without at every moment violating the two great laws of nature, namely, those of force and direction, but rather, by following them exactly (except in the case of miracles) . . . . . . we can easily judge that this hypothesis is the most probable, being the simplest, the most beautiful, and most intelligible, at once avoiding all difficulties . . . (Leibniz 1989, p. 84, "Letter to Arnauld" [1687], my italics) Denying himself recourse to any such cosmological arguments, Kant found his way out of the labyrinth with the powerful concept of the synthetic a priori (Kant [1781] 1996). We replace the pre-critical theory of "pre-established harmony" with a new fulcrum between sensations and ideas, a point from which the transcendental constraints on cognition are articulated, where their architectonics can be considered undogmatically. In order for a concept to take hold in the mind and distill experience into its universality, it needed to be supported by the a priori constitution of space and time, without which there would be no possibility of conceiving. This invariant background needed to be given full credence and priority. We had no access to things in themselves, and thus avoiding dogma meant rebuilding knowledge on the stable structure provided by such invariants of experience, these ultimate conditions of cognition. Knowledge needed to be held in the intuition. Concepts needed to be imported from Leibniz's ratiocinator back into the mind. When they are outside, in the oracle, in the machine's behavior, or just sitting latent in the symbols, they are not being understood. Thus, for Kant, irrational numbers are not concepts, as any iterative analytic sequence of operations that takes eternity to finish cannot be held within the finitude of the mind.
Already this series of philosophical gestures exposed the gap that would need to be fleshed out. Leibniz can only hope that God will ensure the connection between the concept and its external reconstruction, and Kant can only be certain of that which is immediately perceived within an internalized conceptual tautology, giving rise to subjective idealism and "correlationism". We are left with an untenable choice between an unfounded Leibnizian optimism and a depressing Kantian claustrophobia.
Rules and Demonstration
Stiegler's philosophy of technology (see 1996, 1998, 2001), building on the work of Leroi-Gourhan, Derrida, and Husserl, highlights a related aporetic condition exposed in the question of technological exteriorization. Knowledge is only produced by exteriorizing our memories, by progressively outsourcing our faculties into external supports. We invent symbols and grammars, tools and machines, each exteriorization becoming an environmental effect and conditioning future experiences from the outside, from the past, skewing our desires and drives, modifying our priorities. We are retroactively conditioned by all of our exteriorizations. We count dashes in the sand, we draw lines between them, we cut the figure in half, each gesture looping back into our minds, reconfiguring our souls. In Plato's Meno, Socrates demonstrates anamnesis as the consequence of such a diagramming of our intuitions in the sand. Anamnesis, true learning, is the product of exteriorization. Even here, a precursor of Descartes' demon rears its head: Meno asks, how can one be sure to have discovered the new insight and have really learned something, if one did not know it beforehand? How does following these geometric rules reveal "new" knowledge, if it was all contained in the rules to begin with?
The later Wittgenstein (Philosophical Investigations ([1953] 2009) and Remarks on the Foundations of Mathematics ([1953] 1967)) circled this same issue. What is the "plus value" of the proof or of the demonstration? What does it give us? On the face of it, it only seems to provide a vague sense of assurance. The proof's demonstration causes a change in dispositions: it relaxes something in us, such that "now I can go on", a feeling of renewed confidence in how the rules are being applied. It updates our conceptual landscape: it is a point of articulation in our behaviors and our horizon of expectations, a cusp that sends us off into the flow of practice until we encounter the next hiccup. "The proof changes the grammar of our language, changes our concepts. It makes new connexions and it creates the concept of these connexions. (RFM III: §31)" Other than that, it is just a series of symbols or diagrams on a page, which we "follow" through a sequence of operations from one presentation to another. The proof is just a series of transformations, an expression of what the rules of construction allow. When we follow the demonstration to its conclusion, when the formulas wrap around to the beginning and provide us with an equal sign, the effect that these transformations have on our future actions, and the feeling that "now I can go on", always remain outside the deductive demonstration.
Wittgenstein's issues had to do with the gap between theory and practice, or between our models and rules and their supposed execution in real life. From a series of obscure observations on the gap between saying and doing, between the rules we declare we follow and the actual actions we perform, Wittgenstein disentangles a strange web of consequences. There is an echo of Meno's paradox: "What do we learn when we see the new proof-apart from the proposition, which we already know anyhow? Do we learn something that cannot be expressed in a mathematical proposition?" (Wittgenstein [1956] 1967) There is also an echo of Descartes' Demon: in the moment of applying the rule of addition, how do I know I am not following some other rule, like that of "quaddition" (Kripke [1982] 2000)? And there is a reflection of Hume's problem. When we teach a child how to multiply, we correct them whenever they make a mistake, and continue to do so until we feel they have grasped the rule. But even then, we never come to expect the student's future application of the rule to be absolutely flawless. Even the best pupil will go on to make mistakes. So where is the cut-off point? At which point does the pupil's frequency of error transition from being an indication of their ignorance of the rule, to an indication of their momentary misapplication of the rule? This formulation mimics Hume's ([1748] 1921) skeptical arguments about causality, where just because we have always seen a certain stable array of actions causing reactions or effects does not mean that such stable behavior will always be observed in the future. The regularity of causality is not confirmed by experience, it can only be induced. Hume will deny that there is anything empirical that can ever confirm the existence of causation, yet will also wager that it is best to go along with it, to trust our implicit experience of causation and treat the world as-if causation holds.
Wittgenstein will suggest that the skeptical question itself is malformed, a corruption of language's proper use in language games. For him, the problem of rule following implies a category error, a discontinuous leap into a new domain which does not carry forward the consequences of the first, or "preserve" the meaning of one in the other. Between the intuitive feeling of being confident about the rule, and its actual application, execution, or demonstration, Wittgenstein finds a seemingly unbridgeable chasm. "What one means by 'intuition'", Wittgenstein says, "is that one knows something immediately which others only know after long experience or after calculation." (Wittgenstein et al. [1939] 1976) But knowing by intuition tells us nothing. Our intuitions are constantly being empirically overturned, and indeed that is the whole point of mechanical proof in Leibniz, or of the scientific experiment's capacity of falsification, in Popper. The empirical should be the final test. The problem is that, "Whatever is going to seem right to me is right. And that only means that here we can't talk about 'right'." (Wittgenstein [1953] 2009, p. 92). So even when the machine halts on a given yes or no response, I am still in the position of having a choice of assenting or dissenting from the result. Or as in the Duhem-Quine thesis, if the scientific experiment falsifies the theory, I am always free to choose which parts of the theory I modify in order to make it fit with the empirical evidence. It is as though the rules only exist in theory: they fall apart in their application, for their supposed enjoining of actions does not directly translate into the actions themselves, their consequences being beyond the purview of the deductive system.
Recall that the whole point of Leibniz's plea for exteriorization (calculemus) is that since we cannot trust our intuitions, we should process them externally; we should construct the knowledge outside of ourselves where we can proof-check every step. From Leibniz's rational point of view, we needed to account for the proper exteriorization of our "internal" assumptions, while from the point of view of Kant, by then juggling with Hume's incisive skepticism about causality and induction, the problem was that of accounting for their proper re-internalization into concepts that stood for themselves. Kant found the nexus somewhere in the middle, in the invariants of cognition. But the later Wittgenstein can be taken to say that even the relativized a priori never really overcomes the demon. The concepts held in intuition and their rules of construction and decomposition might be just as fleeting as bare empirical experiences, and our supposedly timeless "concrete universals" might actually be sensitive to time and context, for who knows whether the axioms are not being changed every time we conjure the mathematical object?
Paraphrasing Lotze, Cassirer warned that the practice of thought is "never satisfied to advance to the universal concept by neglecting the particular properties without retaining an equivalent for them" (Cassirer [1923] 2015). Indeed, the problem of rule following is the problem of structure preservation, that is, of the tracking of equivalences through changes of context, as it pertains specifically to transitions between concepts and percepts, theory and practice, intuitions and exteriorizations.
Artificial Equivalences
When, in 1832, Évariste Galois frantically scribbled down his theory of symmetries on the final night of his short life, he inaugurated a new way of abstracting from identities. A thing's existence could begin to be rigorously conceived as equivalent to the sum of operations or interventions that would leave the thing indiscernibly different. "The 'nature' or 'essence' of a figure is defined in terms of the operations which may be said to generate the figure." (Cassirer 1944, p. 24) Cayley took an important step further. His work would be interpreted as showing (though not explicitly) that the group of symmetries allowing such "invariance under transformation" can again be generalized: what would become known as Cayley's theorem says that every group is isomorphic to a subgroup of the symmetric group on its own elements, allowing us to imagine that each identity, and the this-ness of a thing, is owed to a particular way of being embedded within higher-order symmetries, implying a hierarchy of abstractions.
By 1872, Klein (Klein 2008) was motivated to apply this logic to the newly discovered non-Euclidean geometries. In what became known as the Erlangen program, Klein used group theory to unify these otherwise distinct geometries, and order them from Euclidean, to affine, to projective in terms of a hierarchy of generality. As Cassirer puts it, through the geometry of affine transformations, "we can no longer maintain the distinction between 'circle' and 'ellipse'" and in projective geometry, "an ellipse can be transformed into a parabola or a hyperbola, such that, in the final analysis, there is but one single conic." (Cassirer 1944, p. 9) So group theory allowed us to see all geometries as just axiomatically stipulated collections of permitted transformations, and in particular that Euclidean geometry was just a "special case" of affine geometry, itself but a special case of projective geometry, building an intuition for the idea that physical space itself could be warped such that things appear to be Euclidean at local scales while appearing curved at larger scales. This intuition would eventually contribute to the development of Einstein's general theory of relativity.
This systematic application of group-theoretical thinking, this practice of taking an object as the sum of operations that leave the object unchanged, enabled a new step in a progressive hollowing out of substances, continuing an ancient philosophical impetus: we abstract from appearances to get to the substances, which can only be grasped by identifying the subtle invariants that remain throughout their transitions. For instance, in Descartes' famous example, the ball of wax has a certain shape and produces a certain sound when knocked, but these change when we hold it close to the fire. Thus, in order to really know the substance, we have to go beyond such accidental properties, and find the substance's invariance under transformation, its true properties. But we then develop new capacities (intuitions, technology, experimental science) that allow us to make new kinds of interventions (constructions, exteriorizations) and realize that, again, what we now take to be invariant needs to be nuanced and displaced with subtler notions.
Cassirer knew to read the successful generalizations of group theory, and its enabling of a new kind of intuition about abstract geometrical entities, as having an implicit relation with similar revelations in psychology, and the curious tension that opposes the fleetingness of perception and empirical science with the apparent timelessness and universality of mathematical truths.
Perception is not a process of reflection or reproduction at all. It is a process of objectification, the characteristic nature and tendency of which finds expression in the formation of invariants. (Cassirer 1944, pp. 19-20) He realized that the concept of invariant was also used in 19th-century psychology. The contents of the mind had hitherto vaguely been modeled in terms of an affection of the soul by the determinations of the real, an empirical inscription of the outside on the inside, mere "reaction to external stimulation". The previous theories had "rested on the 'constancy hypothesis,' i.e., the hypothesis of immediate correspondence between 'stimulus' and 'sensation'." (Cassirer 1944, p. 12) In Hobbes, anticipating Newton, action and reaction were "related in no other way than strict equality". But the main contribution of Helmholtz, Hering, Katz, and their generation, according to Cassirer, was to have slowly abandoned this assumption, and detrivialized the relation of equality between the object perceived and the act of perception. As Helmholtz had suggested, we do not see what is "really there", but what deviates from our expectations, which we are constantly renormalizing, and which are in a sense artificial interventions. "It henceforth appears that it is dissimilarity rather than similarity to the objective stimulus which characterizes perceptual content." (Cassirer 1944, p. 12) No longer could we conceive of a Leibnizian parallelism between extension and intension, between distinct and indistinct, because we now knew that we are perpetually "being deceived" by our senses, that our perceptions in every instant are themselves preselecting what they deliver to experience. "We do not merely "re-act" to the stimulus, but in a certain sense act "against" it." (Cassirer 1944, p. 13) This was clear from the discovery of perceptual color constancy: we see a sheet of paper as being of a constant white whether the room is brightly lit or darkened. We intervene in the scene, we lock some parameters, we artificially freeze part of the environment's variability in order to be attentive to other changes. This active saturation of some parameters corresponds to a kind of selective coarse graining of the perceptual field, such that differences fainter than a certain threshold are clamped to the limit. A subject will identify the difference in shade in an experimental setting, and so we know the eye perceives it, but in everyday perception those same differences are constantly being selectively glossed over. As perception stabilizes the invariant, it is continually re-normalizing the scene, implying that variations beyond a certain threshold are being taken as equivalent, even though the eye sees their difference. Thus, the selection is happening before knowledge or understanding. But worse, it happens even before sensuous intuition, before even our indistinct experiences express themselves to the mind's eye. It is the way perception itself is selectively synthesizing the scene, constantly reconstituting the invariant, distinguishing the foreground from the background, the action from the setting. In order for this to happen, perception is constantly identifying or equating discernibly different things, stretching Leibniz's account of identity and difference; our perceptions are perpetually plucking out invariances by relaxing the precision of their analysis, tracking blobs of invariance "as" objects, long before we are ever conscious of it.
The fact that the invariance of the sheet of paper's color is stabilized in perception already implies a "dissociation" between what we take to be the actual color of the object and what we take to be just an effect of the lighting. In other words, there is a separation maintained within perception itself between the object's primary and secondary qualities, and a division of essential and accidental, object and transformation, a realization that only deepens the post-critical epistemological conundrum concerning subjectivity's alienation from the real.
Functorial Intuitions
In ordinary life we have all sorts of criteria for equality. ...equal weight, equal color, equal number, etc. Aren't there very different criteria for equality in all these cases? (Wittgenstein et al. [1939] 1976)
If I just want to count the three sheep in the field, for such purpose I might only need to hold up three fingers: the structure of their number will be "preserved" in my finger gesture. We will say, with Frege, that there is a one-to-one correspondence between each individual finger and each individual sheep, such that the numbers are equal. If I want to communicate a richer idea, say that one sheep is mother to the other two, I might draw three dots in the sand and add two arrows out from one of the dots to the other two. We will agree that there is a correspondence, again, between the number of dots and the number of sheep, but now also between the internal parent-child relationships of this little family of sheep and the composition of my little diagram. This is essentially what a functor is: a structure-preserving map. A functor maps both the objects and the morphisms (relations, transformations, functions) between these objects from one category (or context) to another, preserving an intended structure or order of composition. The functor is between the sheep and my diagram: we can reveal it by drawing two new set of arrows, first from each sheep to each dot, and then from each "relation of motherhood" to the two original arrows I drew. We begin to see that each taking of one thing for another, each consideration of a thing as if it were another thing, each act of substitution, is intrinsically functorial. Note that this procedure is "substrate independent"-my diagram in the sand does not see the sheep, but captures only the information I intended to communicate-it need only preserve their composition, under the assumption that "up to a unique isomorphism" defining the invertible transformations between the two contexts, the representation will have the same effect, or can be used as a proxy for the real thing.
In functional programming, which incorporates the ideas of category theory, the metaphor of the assembly line is sometimes used to think about functors: the production line goes from beginning to end through several steps where materials are transformed and assembled. If we make changes to the factory, if we bring in new machines, or decide to group some steps together or break them into smaller ones, or if we reorganize which steps happen in series and which happen in parallel, the constraints on these operational changes will be functorial: we will want the same materials to be transformed into the same end-products, and thus, for the new factory to be equivalent, for all intents and purposes, to the old factory. Interestingly, Pierre Lévy describes the process of virtualization using a similar example: "Let's look at the very contemporary example of the virtualization of a company. The conventional organization gathers its employees in one building or a group of buildings. Each employee occupies a precisely defined physical position, and his schedule indicates the hours he will work. A virtual corporation, on the other hand, makes extensive use of telecommuting. In place of the physical presence of its employees in a single location, it substitutes their participation in an electronic communications network and the use of software resources that promote cooperation." (Lévy 1998) Echoing Bergson, Lévy maintains that the virtual "tends" toward actualization, so that the production of a virtuality from an actuality is, in a sense, where the real work happens, where the actual is problematized. This may be a leftover of Bergson's insistence on the two tendencies of matter and life. Russell noted that for Bergson, "The whole universe is the clash and conflict of two opposite motions: life, which climbs upward, and matter, which falls downward." (Russell 1912) The question is whether this apparent asymmetry between the powers of matter and of life, and these putative tendencies of actualization and of virtualization are fundamental or whether they are contingent effects of the types of cognizers humans happen to be, being that we are thermodynamically (antientropically) oriented within a material cosmos. For Bergson, of course, this polarity is fundamental. But, be that as it may, we do not need the notion of tendency or force if we think of the "transformations" between the virtual and actual as functors. Lévy's "virtual" company, by going online, has maintained something of the structure of the previous "non-virtual" company. Functors are the structure preserving maps between such unproblematic or trivial actualities, and the non-trivial structures, patterns, and functional relations they can be unfolded into.
Category theory can be thought of as a means of making practical acts of taking one thing for another rigorous. It is the mathematics of squinting and seeing "family resemblances" between otherwise disparate things. This functorial thinking, where what something is can be rigorously considered in terms of the thing's isomorphic relationships with other things, makes mathematically meaningful the idea that when we consider something, we are always already pulling it out from one context and presenting it in another, and thus from this point of view, the thing in question (its identity) might as well be considered as being equivalent to the isomorphisms that allow us to change the object into something else and recover it later on to varying degrees of fidelity. As Wittgenstein said, "the meaning of words lies in their use", and it is important to note how taking identities as collections of isomorphisms is sometimes called "abuse of notation", which is common practice among working mathematicians, even if strictly speaking it is not allowed by standard set theoretic foundations. Category theory was just getting off the ground when Wittgenstein was concerned about mathematical foundations, and there is a sense in which, through decades of subsequent development, it has begun to respond to some of his worries.
Defending a position in the tradition of Cassirer, Rodin (2014) recently argues that thinking of category theory as a mere extension of the Erlangen program, and a subsequent step into formal abstraction, misses the point. Rodin wants to defend the importance of the intuition from the pure formalism of Hilbert and the structuralism of Dieudonné and the Bourbaki authors. "The switch from the structuralist thinking in terms of invariance to the new categorical thinking in terms of covariance and contravariance (i.e., functoriality) signifies a decisive break with the structuralist viewpoint . . . (Rodin 2014, p. 255)" He argues that category theory is not just another, yet more subtle, kind of structuralism, and that its true innovation comes with Lawvere's work on "functorial semantics" and his development of a new categorical "foundation" for mathematics (Lawvere 1963). I put "foundation" in scare quotes here, because the intuition changes from a theory that is "founded" or "grounded" in bags of dots and their one-to-one correspondences, to a top-down "aerial" perspective on mathematics. Category theory "overcomes" set theory; mathematics effectively swallows up its abstract set-theoretical foundations, and gains a bird's-eye view on them. Overcoming set-theoretical foundations means overcoming our usual recourse to thinking in terms of ultimate atomic entities (or substances) where existence bottoms out, because doing so once we reach this level of generality would restrict us to the category of "small categories", those where all arrows and objects form sets. Analogously, type theoretical foundations would restrict us to Cartesian closed categories. Rather, we have to work from the top down, and imagine a hypothetical category CAT of all categories as an intended model of [elementary theory] ET and then add to ET new axioms which distinguish CAT between other categories; then pick up from CAT an arbitrary object A (i.e., an arbitrary category) and finally specify A as a category by internal means of CAT (stipulating additional properties of CAT when needed). (Rodin 2014, p. 106) Rodin argues that this overcoming of set theory, which the Bourbaki authors had hoped to achieve through the structuralist program, was actually only achieved through a non-structural modification of our intuitions. He stresses that it was the taking of equalities for isomorphisms, rather than the taking of isomorphisms for equalities, that really allowed for Lawvere's big leap into the top-down view offered by the category of categories. For it allows us to make sense of some of the "similarity" we informally observe between different domains of mathematics, and indeed between mathematics and the empirical world or the psychological domain. Indeed, with Cassirer, Rodin wants to conceive of mathematics as a "part of physics", somewhere on the spectrum between the purely ideal and the empirical. Categories are not structures, he claims, they do not deal with invariances. Functors and their "natural equivalences" (transformations between functors that have a "dual", in the reverse direction) are not "invariants" in the old sense. The tendency to view functoriality as a generalization of invariance, for Rodin, is symptomatic of a "conceptual inertia" possibly preventing us from doing full justice to the discovery of functoriality, and its adjustment of intuition.
The new epistemic criterion introduced in functorial thinking does not in his view reduce to the Platonic or structuralist criterion according to which only invariant features are epistemically significant, while all the variable features are accidental and irrelevant. Rather than just tracking invariances, the functor tracks ways in which things can be taken for other things, it maps out different modalities of the as-if, it develops a diagrammatical logic for modeling ways of selecting, indicating, and picking out, or grouping, fusing and gluing things together, and thus of "moving" from one universe of discourse or thought or practice to another. The "hollowing out of substances" is replaced with an intuition for something like a cartography of worlding, a mapping out of virtual transitions between contexts, between worlds, between subjective experiences or objective constructions. It seems to detrivialize the opposition between Leibniz's wishful realism and Kant's claustrophobic finitude: it provides conceptual, formal, and geometric tools that allow a finer analysis, as it were, of what is really going on in the transition between theory and practice, or between perception and cognition, than ever could the comparably much blunter metaphysical tools of process, time, and becoming. The functors are not trivial transitions from one context to the other: they divide into covariant and contravariant, such that their adjunctions do not recover what we had hitherto come to conceive as strict identities. Passing from the left to the right and back again does not necessarily ensure that we have recovered the original entity, as is the case with the one-to-one correspondence. Rather, as in a game of Chinese whispers, each passage through the circuit changes the message. It suggests, furthermore, that there are no ultimate invariants at the bottom of the real, but rather axiomatizes the inherent incompletion and relativity of both substance and structure. It is, in this way, more "honest" about cognition: it takes as a given that whatever is right for me is right, that identities and equalities are always pragmatic articulations rather than pure ontological entities. Thus, Rodin argues that category theory's real intuitive leap beyond set theory actually makes it more concrete. Far from being "abstract nonsense", as it sometimes is accused, we might more accurately say that category theory is concrete nonsense: it makes the "nonsense" between regimes of intelligibility concrete. It is not that we can always treat isomorphisms as equalities, but rather that all equalities are always already only equivalent "up to isomorphism".
Isomorphism and Computability
[For Duns Scotus] the understanding . . . objectively apprehends actually distinct forms which yet, as such, together make up a single identical subject. ... Formal distinction is definitely a real distinction, expressing as it does the different layers of reality that form or constitute a being. . . . Real and yet not numerical, such is the status of formal distinction. (Deleuze [1968] 1990)
This history of ideas has been launched into new territories by the late Vladimir Voevodsky and his influential introduction of univalent foundations. Voevodsky's project imports a category theoretic intuition, an influence he gained in his early reading of Grothendieck's Esquisse d'un programme (Grothendieck 1997, written in 1984 and circulated in the mathematical community long before its publication), but also combines it with a very Leibnizian quest for the mechanical verifiability of mathematical proofs.
Loosely, the proposed univalence axiom, (A = B) ≃ (A ≃ B), stipulates that identity is equivalent to equivalence (or isomorphic to isomorphism). It comes packaged as the centerpiece of a new program for mathematical foundations that axiomatizes the idea that mathematical objects derive their identity deferentially from higher level isomorphic (in this case homotopic) equivalences. Voevodsky's homotopy type theory builds a bridge between logic (computer science, dependent type systems) and geometry (topology), such that each logical type is equivalent up to unique isomorphism to a corresponding path in homotopic space, and is related to other types through nested cascades of type dependencies, described geometrically as a hierarchy of homotopic fibrations from one path to another. The gesture here can be thought of as "taming" the wild jungles of category theory and establishing a coded hierarchical order of inclusion more suitable for launching complex mathematical research programs, where it is necessary to keep track of the underlying logic and the higher-order symmetries at each step of the construction. If category theory swallowed up its set-theoretical foundations, where all mathematics were built up from the empty set, univalent foundations now spits them out again as higher-dimensional groupoids ("special case" categories, where every morphism is an isomorphism), and mathematics is then "built down" from a hierarchy of dependent types corresponding to more or less complex paths in homotopy space. Thus, in homotopy type theory, truth becomes a special case of logic, logic a special case of set theory, set theory a special case of category theory, category theory a special case of higher categories, and so on into the firmament.
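For reference, the axiom can be sketched in the notation of the Univalent Foundations Program (2013); the names idtoeqv and ua below follow that presentation, and the display is a gloss rather than a full formal statement:

```latex
% Sketch of univalence in HoTT-book notation (assumed here, not quoted):
% idtoeqv sends an identification of types to an equivalence by path
% induction; univalence (ua) asserts that idtoeqv is itself an equivalence,
% so identifications of types and equivalences between them coincide.
\[
  \mathsf{idtoeqv} : (A =_{\mathcal{U}} B) \to (A \simeq B),
  \qquad
  \mathsf{ua} : \mathsf{isequiv}(\mathsf{idtoeqv}),
  \qquad\text{hence}\qquad
  (A =_{\mathcal{U}} B) \simeq (A \simeq B).
\]
```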
Type theory first emerged as a somewhat ad hoc correction of arithmetic foundations to avoid the Russell Paradox, which Russell famously discovered in Frege's set-theoretic approach. After Church and Turing, where the paradox reappears as non-terminating programs and algorithms that do not halt on a specific result, it became obvious that a special "typed" form of computational logic was needed, a calculus designed to avoid these paradoxical, non-sensical, non-terminating programs. There is an aura around this historical development, which Voevodsky himself tried to dispel, that naturally demotes computability theory in comparison to "pure mathematics". For the intuition after Turing is that the oracular exteriorization Leibniz had imagined has been proven impossible. Avoiding uncomputable programs seemed to imply compromising the full power of mathematics, thus priming the intuition to think of computability as being more of an engineering problem than one of pure mathematics. From the point of view habituated to imagining all of mathematics as being reducible to ZFC and to vague intuitions about how sets of sets behave, it looks as though incompleteness implies that we can avoid non-terminating programs only by restricting ourselves to a small region of "computable" mathematics, an impure mathematics.
This computable, applied, subset of pure mathematics has been found to have a deep connection with intuitionistic logic. In intuitionistic logic (originating in the work of Brouwer, Heyting, and Kolmogorov), a proposition can be true, in which case its truth can be presented in the form of a proof, or absurd, which just means it is "empty" of proofs, it has no terms. Truth is just the condition of having proofs, the idea being that a thing is the collection of ways it can be constructed or presented, and a thing that does not exist, cannot be presented. The Curry-Howard correspondence formalizes how this idea applies to both logic (computer science) and mathematics. Propositions are types. Computable programs are proofs. As Brouwer argued, contra Hilbert, the truth of a proposition is not ensured by showing that its being false would lead to a contradiction: we must be able to construct a positive proof, we must realize it, make it manifest, rather than appeal to its absurd "opposite". Thus, an arrow is drawn from the absurd to the unit type. An asymmetry is written into the logic of computability such that the entire apparatus descends from an original insistence that the paths compose and that the functions terminate, that the constructions be realized, or actualized. It is this idea that homotopy type theory expands on: in addition to having this dual interpretation of types in terms of a family of programs in computer science or formal proofs in mathematics, it provides a third, geometric interpretation. Each "computable function" can be modeled topologically as a continuous path in space. We have hence returned here to Duns Scotus's use of the concept of virtuality. Si esse depends on quid est: a thing exists, or is "true", if and only if it has demonstrable properties, or exemplifying attributes. Univocity was always a question of the structural correspondences between types, terms, and instances, and the virtual was a primitive notion of structure preserving map between objects and their attributes, subjects and their predicates, potential operations and their provisional actualizations.
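To see how literal the correspondence is, here is a minimal sketch in the Lean proof assistant (an illustration of the propositions-as-types reading; the theorem name is invented for this example and does not come from the sources discussed):

```lean
-- A proof of A → (B → A) is literally a program: a function that takes a
-- witness of A and returns a function that ignores its B-argument.
theorem take_for_granted {A B : Prop} : A → (B → A) :=
  fun a _ => a

-- `False` has no constructors: it is "empty of proofs". Its eliminator is
-- the arrow drawn from the absurd to anything else, ex falso quodlibet.
example {A : Prop} : False → A :=
  fun h => False.elim h
```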
Recall that part of Wittgenstein's unease had to do with that curious gap between formal theory and the real-world practice or application. In what way do the rules of the deductive system relate to a possible real-world event, quantity, quality, or behavior? There is no "equal sign" between the material goings-on (say, the machine's gears crunching along) and the symbolic formalization of an operational rule. Voevodsky is concerned with a similar gap. In first-order logic, all the special characters are ultimately forced into a relation with some natural language equivalent, which we are supposed to intuitively grasp. We say that ∀ means "for all". We say that ∃ means "there exists". But the meaning of "means" in these cases is outside of the deductive system in question. The problem is that this invisible equal sign between theory and practice seems to only exist in practice, since no theoretical proxy ever measures up to the empirical. 2 As Wittgenstein notes, the sentence "'We can construct a pentagon' is a proposition of physics. It is not a mathematical proposition but an experiential one." (Wittgenstein et al. [1939] 1976, p. 49). Voevodsky reasons similarly. In a documented discussion following his 2010 IAS lecture "What if Current Foundations of Mathematics are Inconsistent?", Voevodsky admits to sharing this conviction: "There can be no inconsistency in experimental science. There can only be the result of an experiment . . . I definitely consider material reality as the absolute judge of truth . . . " (Voevodsky 2010). A scientific experiment always gives a result, a yes or a no response, as Wheeler (1999) emphasized. Hence Voevodsky's move. What if instead of assuming that mathematics is incomplete in virtue of the fact that we just know that it is consistent, we rather submitted ourselves to the possibility that it is our current intuitive understanding of foundations that is inconsistent? Perhaps we do not "just know" that first-order arithmetic is consistent, perhaps we do not "just know" how infinite hierarchies of sets behave. Instead, let us replace this assumption with the functorial intuition, the intuition of what it means to take something for another, to consider something "for our current purposes", or "up to isomorphism". In order to transform what appears to be just a restricted computable subset of mathematics (the proof-checkable part) into a foundation for all of mathematics, thereby bringing pure math closer to applied math, Voevodsky jettisons the distinction between strict equality and weak equivalence. Univalence means that isomorphic structures can be formally identified, not just "in practice" or by "abuse of notation". Univalence "is about expanding the notion of identity so as to coincide with the (unchanged) notion of equivalence." (The Univalent Foundations Program 2013, p. 5). Turning mathematical foundations inside out in this way means admitting within mathematics the relativity of even the most clear and distinct logical deduction.
Operational Virtuality
If mathematics must ultimately be confronted to the real, to empirical becoming, if it must make a difference in this world rather than in some other possible world, does this not imply that the so called "universal truths" of mathematics are just as temporary as the flow of experience? This Voevodsky tacitly admits: Mathematics has been historically kind of static. If something has been proved it has been proved forever. One can speculate about the possibility of a kind of "dynamic" mathematics in that sense. It is very hard to imagine at this point . . . (Voevodsky 2010) The admission that math may be just as impermanent as experimental truth evokes an openness to a kind of dialectical materialism, no doubt part of Voevodsky's education. But it could equally be said to echo American pragmatism, where, in William James' infamous formulation, truth is understood as the "cash value" of a proposition: something is true if it can effectively be cashed in and make a real practical difference. For Bergson, of course, the progressive geometrization of reality could only serve to mask the truth of becoming.
"This natural mathematics is only the rigid unconscious skeleton beneath our conscious supple habit of linking the same causes to the same effects; and the usual object of this habit is to guide actions inspired by intentions, or, what comes to the same, to direct movements combined with a view to reproducing a pattern." (Mitchell [1911] 1944, pp. 50-51) Logical and geometric concepts only created an illusory "intelligible world", Bergson thought. They "are not, indeed, the perception itself of things, but the representation of the act by which the intellect is fixed on them." (Mitchell [1911] 1944, p. 177) The task of the philosopher, he thought, in the face of the geometrization of the world, was to uphold the distinction between "real and symbolic" (Dingle [1922] 1965, p. 153). So Bergson ends up defending a form of equivocity: the virtual is here a means to protect an absolute difference between the created and the uncreated, the real and the imaginary, the real physicist making a measurement and the hypothetical observer in an imagined spatio-temporal frame of reference. But indeed, if Einstein's view of time won out over Bergson's, it is only because it could be cashed in for real effects and predictions, it virtually included pragmatic constructions, it "made a difference". It established, in other words, a virtual link between the model and its external verification in practice. Practice, and its deferential character-its Heideggerian character, let's say, the preorder of tool-being, where each task delivers itself to the next through an endless chain-is the only possible site for virtuality: all things derive their identity from potential operations, through a hierarchy of transcendental constraints. Virtuality in this way is the yet unactualized future of realizable practice, ensuring that each actualization is an expression of the virtual, that each subjective experience is an element of the transcendental constraints on cognition, and that things are nothing more nor less than the virtual operations that allow their constructions. Univalence echoes univocity: if something exists, it virtually contains all the ways we may come to know it. A thing is equivalent to the operations required for actualizing it. Thus, if a thing is realized, or actualized, it unfolds a proof of existence from an equivalence class of virtual operations.
It is tempting to speculate that with such gestures, such admissions, mathematics comes close to appeasing some of Wittgenstein's worries, and perhaps even some of Bergson's objections. The synthetic process in mathematics and logic in the last century has begun to build a very subtle bridge between the world of intuitions and the world of exteriorized mechanical provability, and between the worlds of formal theory and practice. This bridge, it seems to me, is achieved through a most interesting compromise. Math has admitted into its formalism the kind of relativism with which equivalences are trafficked in everyday practice, and written this relativity into law, as a fundamental rule. The compromise is achieved by admitting that whenever we pick something out from the rest, whenever we select or point to something, there is always already something like an invisible equal sign between the thing we are talking about and our action of presenting it, between our intuitions and our hypomnemonic exteriorizations. There is a kind of honesty in this, an admission that what is right for me is right, and that truth or identity or equality is always a matter of context. But the upshot is that we learn that this does not necessarily mean, as Wittgenstein thought, that "here we can't talk about 'right'". For indeed, this admission is precisely what allows us to recuperate rigorous ways of addressing validity: we can legitimately "talk about right" up to isomorphism.
Would Bergson, for his part, have acquiesced to such an operational account of virtuality? Or would he have fought it on the grounds that it was too bound up with methods of effective realization, thereby immediately, in his view, cutting us off from an irreducible essence and blinding us to the truth of becoming? Will we not have begun, however, to recover something of his vision? Cassirer rightly notes that even in Bergson, "a spatial intuition and schema seem to have slipped unnoticed into his analysis of time" (Cassirer [1923] 1985). His cones, shells, and sheaves now reappear as fibrations, functors, natural transformations, and ever more rigorous diagrammatic tools for modeling functional relationships between perceptions and conceptions of the world at different scales and levels of description. Time, process, and becoming are detrivialized geometrically through a rigorous demystification of the cone, and the rules of logic and mathematics are here no longer some rigid grid the intellect imposes on reality, so much as continual compromises between the virtual and the actual, between the intuition and the exteriorization. But contra Bergson, this progress was achieved not by defending an absolute difference in kind between the map and the territory, or the relations and the relata, but by dropping this equivocal criterion altogether. It is achieved not simply by dogmatically rejecting the dualist essentialism, but through a careful, patient, and always explicitly provisional "asymptotic" monism, a unificatory and conciliatory attitude, rather than a reactionary empirical prejudice. We see now that the opposition of continuity was nothing more than the preservation of structure through transformation, and that it is therefore not opposed to logical ratiocination. Life is not the ancient enemy of matter. The virtual is not opposed to mechanism. And the contraction of experience is a structure-preserving operation, quite possibly owing to an evolutionarily conditioned propensity to link the before and the after, to find a terminus for our actions, and to respect the rules of perpetual composability. Ex nihilo nihil fit: the same gesture that establishes intelligibility and coherence, also prohibits the summoning of the void. And continuity is the preservation of structure from the initial to the terminal state, which is precisely why a computable function is executable. The non-terminating program, for its part, is the site of a catastrophe, where the coherence is broken, the continuity is cut, and Ex falso quodlibet, an inevitable leap from absurdity to fixity, as a new paradigm or category or world is reified. Continuity is (re)established, it is a struggle that can only take the form of a demand for perpetual composability, consequence of our submission to the frictions of the real. Each new expression comes ready built with its deferral to the next. There is no continuity, no coherence, no common sense, without this process of turning the intuitions inside out through exteriorization, subjecting the doxa to the trials of the empirical. The drama of the actual and virtual could just be a by-product, a side effect of the most general constraints biology imposes on living things: the paths must compose, the structures must be preserved, all actions must result in a change, and make a difference in practice. All this in the absence of certitude: we do not know what our bodies can do, what our systems are capable of, the world always only appearing in hindsight.
Matter and causality, objective stability and intelligibility are indeed grasped through desperate constructions, contingent and ad hoc defaults to provisional actualities. But the real itself is unfinished. It perhaps exists only in the making, that is, in whatever survives the transition between actualities, between models of objectivity, between scientific paradigms, but also, importantly, between individual prejudices or biased perspectives. | 13,545 | 2021-02-09T00:00:00.000 | [
"Philosophy"
] |
Structural Model of the Bilitranslocase Transmembrane Domain Supported by NMR and FRET Data
We present a 3D model of the four transmembrane (TM) helical regions of bilitranslocase (BTL), a structurally uncharacterized protein that transports organic anions across the cell membrane. The model was computed by considering helix-helix interactions as primary constraints, using Monte Carlo simulations. The interactions between the TM2 and TM3 segments have been confirmed by Förster resonance energy transfer (FRET) spectroscopy and nuclear magnetic resonance (NMR) spectroscopy, increasing our confidence in the model. Several insights into the BTL transport mechanism were obtained by analyzing the model. For example, the observed cis-trans Leu-Pro peptide bond isomerization in the TM3 fragment may indicate a key conformational change during anion transport by BTL. Our structural model of BTL may facilitate further studies, including drug discovery.
Introduction
Understanding the molecular mechanisms underlying the passage of ions and small molecules through biological membranes is a fundamental aspect of cell physiology. This knowledge is also crucial for analyzing disease associations, for identifying potential drug targets, and for improving the safety and efficacy of new or existing drugs. Transmembrane proteins provide key means of molecular transport through the cell membrane. They are also extensively studied as potential drug targets. Of strong interest as potential drug targets are the organic anion transporter family proteins (OATPs), due to their capacity to serve as tumor biomarkers and effective cancer drug transporters [1,2]. The physiological expression patterns of the OATPs are altered in malignant tissues. Screening tumors for OATP expression may enable an OATP-targeted therapy with higher efficacy and, most importantly, decreased side effects relative to current therapies. In addition to offering new opportunities, membrane transport proteins also pose challenges to drug discovery research. To overcome the difficulty of determining membrane protein structures, several methodologies for studying membrane proteins have been reported, including specialized techniques for stabilizing and manipulating the proteins; the choice of technique depends on the protein itself and on the method planned for exploring its structure, such as X-ray crystallography or NMR spectroscopy [3].
Motivated by an interest in the druggability of anion transporters, we have identified the transmembrane protein bilitranslocase [4] (BTL) (UniProt O88750, TCDB 2.A.65) as a potential drug target that exhibits partial functional similarity to OATPs [5]. BTL is a plasma membrane transporter involved in the transport of organic anions, including the transport of bilirubin through the liver plasma membrane [6]. BTL may play an important role in both human pathology [7] and drug delivery [8]. The primary structure and biological functions of BTL have been known and studied for decades [4][5][6][7][8][9][10][11][12][13][14]. However, the secondary and tertiary structures of BTL are not known. No sequence homologs have been detected for BTL; however, it has been predicted, based on homology-independent considerations, that BTL has four TM α-helices [15]. In addition, NMR spectroscopy has validated the α-helical structures of two key transmembrane helices, TM2 and TM3, in SDS media [16,17]. H-bonds were suggested to play a role in the transport mechanism, based on chemometrics modeling studies of substrates with experimentally determined affinities [8,11]. Despite these characterizations, a full understanding of the BTL transport mechanism is hindered by the lack of its complete atomic structure(s). Although BTL may exist in a multimeric form [10], the complete oligomeric state of the protein is not clearly known, which makes it even more difficult to understand the complete structural assembly of BTL.
The structure of a membrane protein can be characterized by a variety of approaches. If good quality crystals can be obtained, X-ray crystallography provides an accurate atomic structure [18]. NMR spectroscopy can be used to map the flexible segments and their conformational dynamics [19], including the cis-trans heterogeneity [20]. However, the 3D structure determination of membrane proteins by X-ray crystallography or NMR spectroscopy remains challenging, for a number of reasons [18]. Besides taking advantage of the emerging experimental technologies targeted towards membrane protein structural biology [21], computational analysis, with extrapolation of sparse experimental information, is a valuable complementary approach for expanding our knowledge of transmembrane protein structures [22]. It includes sequence-dependent predictions of transmembrane regions, their stability and interactions, as well as molecular dynamics simulations, coarse-grained simulations and other stochastic methods.
Here, we aimed to structurally characterize the transmembrane domains of BTL. Coarse-grained models of the assembly of the four transmembrane α-helices of BTL (TM1, TM2, TM3, and TM4) were generated using Monte Carlo (MC) simulations, a stochastic sampling approach useful for systems with several coupled degrees of freedom, taking into account the predicted transmembrane helix-helix interactions and several other restraints. The generated conformations were clustered and ranked based on the Discrete Optimized Protein Energy (DOPE) statistical potential for inter-atomic distances in proteins [23]. The top-scoring models were analyzed to propose the packing of the four BTL transmembrane helices by structural modeling relying on experimental constraints.
To validate the proposed model, we used nuclear magnetic resonance (NMR) and Förster resonance energy transfer (FRET) spectroscopies to map interactions between TM2 and TM3, the two transmembrane segments that participate in the transport of anions through BTL [9,16,17]. Structural analysis of the TM2:TM3 pair in SDS-d25 micelles was performed based on the previously collected NMR data [16,17]. The existence of the TM2:TM3 pair in the SDS micelles was demonstrated by the FRET experiments. To increase the quality of our experimental data sets, we introduced two 15N-labeled alanines (15N-Ala) into the TM2 and TM3 segments. The selective 15N-Ala labeling allowed us to observe the dynamics of TM2, TM3, and the TM2:TM3 pair in the SDS micelles. The occurrence of a cis-trans peptide bond isomerization was detected in the TM3 fragment at the Leu230-Pro231 peptide bond. This isomerization may be key to the uptake mechanism of different anions via BTL.
Results and Discussion
Transmembrane Helix-Helix Interactions
The four TM α-helices (TM1: Phe24-Asp48, TM2: Phe75-Cys94, TM3: Gly220-Tyr238, TM4: Pro254-Ser276) have been predicted by a neural network model based on homology-independent considerations [15]. The final transmembrane region boundaries were predicted based on statistically derived amino acid preference data for the transmembrane region boundary positions. Although charged residues are present at the boundaries, our previous studies have shown that these boundary residues remain within the lipid bilayer during the MD simulations and do not show any translational motion along the y-axis [16,17]. These results were confirmed by NMR spectroscopy [16,17]. The transmembrane helix-helix interactions were predicted taking into consideration the correct topology of BTL. This prediction is independent of the previously determined structures of TM2 and TM3.
The four transmembrane helices of BTL can in principle form six combinations of helix-helix pairs. Each transmembrane helix was represented as a rigid body. Models that optimize pairwise helix-helix interactions were enumerated with the open-source Integrative Modeling Platform (IMP) package (http://integrativemodeling.org) [24]. All combinations of rotational, translational and tilting degrees of freedom were considered, within specified ranges and at a specified resolution guided by previous statistical analyses of transmembrane protein structures. The relative stability of each helix-helix configuration was estimated using a scoring function based on known interaction data for transmembrane helices. The BTL transmembrane helix pairs TM2-TM3 and TM1-TM4, which showed the most optimized configurations as indicated by their lowest scores, were predicted to be interacting.
Incidentally, the two interacting transmembrane regions TM2 and TM3 contain the AxxxG and GxxxxxxA sequences, respectively. The GxxxG and GxxxxxxG motifs are known transmembrane helix dimerization motifs in which Gly can be replaced by other small amino acid residues, such as Ala or Ser [26]; the AxxxG and GxxxxxxA sequences in TM2 and TM3 may therefore represent such motifs. The motifs are further associated with the β-branched residues Val90, Ile228, Ile232, and Ile234. The β-branched amino acid residues isoleucine and valine are hypothesized to reduce the entropic cost of transmembrane protein folding with their constrained rotameric freedom in the helical conformation [27]. The presence of these transmembrane helix dimerization motifs in association with the β-branched amino acid residues in TM2 and TM3 provides additional validation for the predicted TM2-TM3 transmembrane helix-helix interaction pair.
Predicted Arrangements of the Four Transmembrane Regions of BTL
Monte Carlo simulations were used to sample the accessible conformational space and predict the probable assembly of the four transmembrane regions of BTL. The diameter of the assembly was restrained to ~26 Å. The tilt and depth (translation along the z-axis) restraints were calculated based on the length of the transmembrane helices. Other restraints applied include DOPE, excluded volume, packing, and distance restraints corresponding to the predicted TM2-TM3 and TM1-TM4 transmembrane helix-helix interactions. The restraints are based on previous analyses of known transmembrane protein structures [28]. Besides the rotational and tilting movements, each individual transmembrane helix was allowed only certain translations. Accordingly, for TM1 the center of mass was fixed and the only allowed translation was along the z-axis. For TM2, both y- and z-axis translations were allowed. For TM3 and TM4, translations along all three axes were allowed.
Two million conformations of the four BTL transmembrane regions were generated by the Monte Carlo method. These conformations were then clustered by hierarchical clustering based on their pairwise Cα RMSD differences. A 2 Å threshold on the RMSD cutoff resulted in 3520 clusters; the centroids of these clusters were selected as the representative conformations.
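The clustering step can be illustrated with the following Python sketch, which assumes a precomputed square matrix of pairwise Cα RMSD values; it is a schematic stand-in for the actual pipeline, and the function and variable names are illustrative only:

```python
# Schematic re-implementation of the clustering step (illustrative only).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_conformations(rmsd_matrix: np.ndarray, cutoff: float = 2.0):
    """Group conformations whose pairwise RMSD falls below `cutoff` (in Angstrom)."""
    condensed = squareform(rmsd_matrix, checks=False)   # condensed distances for linkage
    tree = linkage(condensed, method="average")
    labels = fcluster(tree, t=cutoff, criterion="distance")
    # One representative per cluster: here, the member closest on average to
    # all other members (a medoid), standing in for the cluster centroid.
    representatives = {}
    for cluster_id in np.unique(labels):
        members = np.where(labels == cluster_id)[0]
        mean_rmsd = rmsd_matrix[np.ix_(members, members)].mean(axis=1)
        representatives[int(cluster_id)] = int(members[np.argmin(mean_rmsd)])
    return labels, representatives
```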
The BBQ algorithm was used to reconstruct the atomistic backbones of the cluster representations [29]. The side chains were added using the SCWRL4 algorithm [30]. The representative all-atom structures were then ranked by their DOPE scores developed for transmembrane proteins (G.-Q. Dong and M. Bonomi, unpublished) and further analyzed to infer how the four transmembrane regions of BTL are arranged.
There are six possible unique arrangements of the four transmembrane regions of BTL as observed from the extracellular surface (Fig 1A). The transmembrane regions TM1, TM2, TM3, and TM4 are denoted as A, B, C, and D, respectively. Positioning TM1 at the top-left corner and reading clockwise, the six arrangement types are ABCD, ADBC, ACDB, ABDC, ACBD, and ADCB. The 3520 representative conformations from the Monte Carlo simulation were classified into these six groups (Table 1). The most frequently observed arrangement is ABDC (Top2, 3, 4 conformations in Fig 1B). Arrangement types ACBD and ADCB (Top5 and Top1 conformations, respectively, in Fig 1B) also occur frequently. The arrangements of the amino acid residues around the helical axes (Fig 1C) support the predicted packing of the transmembrane helices.
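That exactly six unique arrangements exist follows from fixing TM1 at the top-left corner and permuting the remaining three helices (3! = 6); the short enumeration below, given for illustration only, reproduces the six labels listed above:

```python
# Enumerate the clockwise arrangements with TM1 (A) fixed at the top-left.
from itertools import permutations

arrangements = ["A" + "".join(p) for p in permutations("BCD")]
print(arrangements)
# ['ABCD', 'ABDC', 'ACBD', 'ACDB', 'ADBC', 'ADCB']
```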
We scored the 3520 all-atom representations using the DOPE statistical potential for transmembrane proteins. The distribution of the 100 top-scoring conformations among the arrangement types is similar to that of all 3520 conformations (Table 1). Forty-four out of the 100 top-scoring conformations are ABDC. This arrangement has the interacting transmembrane helices positioned diagonally opposite to each other (Top2, 3, 4 in Fig 1B). The highest scoring conformation exhibits the ADCB arrangement (Fig 1B). The fifth ranked conformation is ACBD. Both the ADCB and ACBD arrangements have the interacting helices positioned adjacent to each other. These three arrangement types together account for 86 of the 100 highest-scoring conformations and 84.4% of all 3520 conformations. In conclusion, our analysis helped to narrow down the probable arrangements of the four transmembrane regions of monomeric BTL. However, it has not resulted in a precise model of the functional transport channel. Keeping in mind that there is some experimental evidence for a possible dimeric or trimeric form of BTL [10], further experimental investigations will be needed.
Distance between the Transmembrane Regions TM2 and TM3
The TM2 transmembrane region lies immediately adjacent to the conserved bilirubin-binding motif. TM3, on the other hand, is adjacent to a second bilirubin-binding site and also contains a conserved ligand-binding motif at the C-terminus. Therefore, the transmembrane regions TM2 and TM3 are postulated to play key roles in the formation of the transport channel, ligand binding, and mediation [9,12]. Previous experiments with cysteine and arginine modifications had concluded that BTL exists in two metastable forms with different substrate affinities [10]. This metastable nature of BTL, detected experimentally [10], can possibly be attributed to the presence of proline-induced kinks, which may render flexibility to both transmembrane regions. Pro85 in TM2 and Pro231 in TM3 are located in the middle of the transmembrane channel and presumably define the pore in variable functional states. Therefore, we analyzed the distances between the proline residues and the N-termini of TM2 and TM3, as follows.
The most populated arrangement ABDC has the transmembrane regions TM2 and TM3 arranged diagonally opposite to each other. The average distance between Pro85 in TM2 and Pro231 in TM3 is 19 ± 4.3 Å for the 44 top-scoring ABDC conformations, and 17.4 ± 4.35 Å for all 1330 ABDC conformations (Fig 2A and 2C). The proline-proline distance in 36% of the conformations with the ABDC arrangement is between 16 and 20 Å. In ADCB and ACBD, the average proline-proline distance is 14 ± 3.6 Å and 13.1 ± 3.4 Å, respectively. In these two arrangements, the TM2 and TM3 transmembrane regions are adjacent to each other. The distance between the N-termini of the two transmembrane segments is 30.3 ± 4.1 Å for all three arrangement types (Fig 2B and 2D). In ABDC, this distance represents the diagonal in the complete assembly of the four transmembrane regions.
Identification of TM2:TM3 Pair in SDS Micelles with FRET Technique
The formation of the TM2:TM3 pair in the SDS detergent was confirmed by the Förster resonance energy transfer (FRET) experiment. Energy transfer from the excited state of a donor fluorophore to an acceptor fluorophore typically occurs when the two are in close proximity (<10 nm). Atto488 and Atto594 were used as the FRET donor-acceptor dye pair to label the TM2 and TM3 peptide fragments. Based on the long-range dipole-dipole coupling mechanism, the FRET signal was recorded for the surfactant (SDS) with the dye-labeled TM2 or TM3 peptides. This signal was much stronger than that recorded for the dye-free sample, thus establishing the proximity between the two fragments TM2 and TM3 in a single SDS micelle (Fig 3). Moreover, the collected data demonstrate the presence of both types of pairs (TM2:TM2 and TM2:TM3) in the SDS micelles.
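The distance dependence behind this readout can be sketched as follows; the Förster radius used in the example is an assumed, illustrative value rather than a calibrated parameter for the Atto488/Atto594 pair:

```python
# Förster transfer efficiency falls off with the sixth power of the
# donor-acceptor distance: E = 1 / (1 + (r / R0)^6).
def fret_efficiency(r_nm: float, r0_nm: float = 6.0) -> float:
    """Transfer efficiency for donor-acceptor distance r_nm and Förster radius r0_nm."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

for r in (2.0, 6.0, 10.0):
    print(f"r = {r:4.1f} nm  ->  E = {fret_efficiency(r):.2f}")
```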
CD Spectroscopy of TM2:TM3 Pair in SDS Micelle Media
The CD spectra recorded for the isolated TM2 and TM3 fragments and for the TM2:TM3 pair in SDS and DPC surfactants exhibit a substantial amount of α-helical conformation, characterized by two minima near 208 and 222 nm (Fig A (left) in S1 File) [32]. A quantitative analysis suggests that in the zwitterionic (DPC) environment the fraction of time and/or residues of TM2 and TM3 in the α-helical conformation is 15% and 40%, respectively (Fig A (left) in S1 File). Inspection of the CD data reveals that, similar to previously reported data for magainin peptides [33], the preference for the α-helical conformation of the TM2 and TM3 segments differs in anionic (SDS) and zwitterionic (DPC) media, reflecting the importance of electrostatic and hydrophobic interactions between the peptides and the micelles. For the TM3 fragment, the helical conformation is favored in the anionic medium, whereas the TM2 segment prefers this form in the zwitterionic micelle. The conformational behavior of the TM2:TM3 pair is determined by the TM2 fragment rather than the TM3 fragment in all types of media used (Fig A (right) in S1 File).
Structural Analysis of BTL TM2:TM3 Pair in SDS-d25 Micelle by NMR Spectroscopy
Structural details of the two key BTL transmembrane regions, TM2 and TM3, in SDS-d25 micelles were recently analyzed on the basis of homonuclear and heteronuclear 2D NMR data sets [16,17]. The high-resolution 3D structures were solved based on 250 (139 intraresidual, 78 sequential, and 33 medium range) and 180 (107 intraresidual, 58 sequential, and 15 medium range) nontrivial 1H-1H NOE distance constraints obtained from the analysis of 2D NOESY spectra of TM2 and TM3, respectively. The 3D structures obtained from our previous studies [16,17] were positioned in the SDS lipid media and subjected to MD simulations in a water bath with the AMBER 9 molecular dynamics package (Fig 4) [34].
To detect the intermolecular NOESY contacts between the TM2 and TM3 transmembrane fragments, which are critical for structural analysis, two 15N-labeled alanines were incorporated into each of the studied peptides. The alanines Ala80 and Ala88 in TM2 and Ala225 and Ala233 in the TM3 fragment were labeled. Unfortunately, the 3D 15N-edited NOESY-HSQC spectra acquired with mixing times up to 250 ms did not exhibit any intermolecular signals, owing to the fact that the TM2 and TM3 segments are separated by at least 13 Å (Fig B in S1 File), which is in line with the results obtained from the computational analysis of the transmembrane region assembly. Finally, the 3D structure of the TM2:TM3 dimer in SDS micelles was evaluated based on previously determined experimental constraints for the TM2 [17] and TM3 [16] fragments (Fig 4).
An overlay of the 2D 1H-15N HSQC spectra acquired for the TM2 and TM3 fragments shows that the 1H and 15N chemical shifts of the amide groups of the 15N-labeled alanines are in agreement with the previously recorded data sets acquired at natural abundance of the 15N isotope [16,17]. Furthermore, the 1H-15N HSQC spectra of the TM2:TM3 pair are presented as a superposition of those obtained with the individual TM2 and TM3 segments (Fig 5, Fig C (left, right) in S1 File). Additional peaks corresponding to a less populated conformation were detected for the TM3 species (Fig 5A). Inspection of the 2D 1H-1H NOESY spectra acquired for the TM2:TM3 pair reveals a weak signal, which could be assigned as the Leu230 Hα-Pro231 Hα cross-peak of the cis rotamer of the Leu230-Pro231 bond in the TM3 peptide (Fig C (bottom) in S1 File). We conclude that the second conformation appears due to cis-trans isomerization of the peptide bond around the central Pro231 in the TM3 fragment. The relative fraction of the second conformation was estimated to be 30% based on the ratio of the peaks. Interestingly, this ratio did not change during the entire time of NMR observation, which suggests the presence of a high energy barrier between the two conformers. Indeed, BTL is found to be present in two metastable conformers with different substrate affinities, and the conversion between the conformers can be accelerated by the presence of BTL substrates, including bilirubin [10]. On the other hand, a minor cis conformation was not detected for the central Ser84-Pro85 peptide bond in the TM2 segment.
The position of the TM2:TM3 pair in SDS micelles was analyzed based on the 25 ns trajectory simulated in AMBER 9 with the parm99 force field and the TAV protocol (S1 File). As follows from the solved 3D structure of the TM2:TM3 pair, the TM3 peptide takes a more central position inside the micelle, positioning itself in a more hydrophobic environment in comparison with previously published data [16]. At the same time, the TM2 fragment moves close to the micelle surface, with its two α-helical parts buried in the hydrophobic region of the SDS micelle and the central loop around Pro85 situated at the hydrophilic surface (Fig 4). The translational diffusion coefficient (Dtr) was determined with a DPFGDSTE (Double Pulsed Field Gradient Double Stimulated Echo) experiment. On the basis of this experimentally determined Dtr, we estimated the hydrodynamic radius (Rh) for all studied systems (Fig D, Fig E, and Table B in S1 File). Since the bilirubin-binding motif (65-75) is close to (and part of) the transmembrane segment interrupted by Pro85, it is possible that binding of bilirubin at the extracellular surface of the protein triggers a conformational change of TM2 around this Pro kink. In fact, bilirubin and nicotinic acid are positive allosteric effectors, inducing BTL into its high-affinity state [10]; in turn, this could be a driving factor for substrate transport. Flexibility around this Pro kink could also be triggered by cysteine reagents, which are certainly exposed to the medium (unpublished data) and induce BTL into its low-affinity conformation [10].
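The conversion from a measured Dtr to Rh is conventionally made with the Stokes-Einstein relation; the Python sketch below illustrates that estimate under assumed values for temperature and solvent viscosity, and is not the exact procedure detailed in the S1 File:

```python
# Stokes-Einstein estimate of the hydrodynamic radius: R_h = k_B * T / (6 * pi * eta * D_tr).
from math import pi

K_B = 1.380649e-23  # Boltzmann constant, J/K

def hydrodynamic_radius(d_tr_m2_s: float, temperature_k: float = 303.0,
                        viscosity_pa_s: float = 0.8e-3) -> float:
    """Return R_h in metres; viscosity defaults to roughly that of water at 303 K (assumed)."""
    return K_B * temperature_k / (6.0 * pi * viscosity_pa_s * d_tr_m2_s)

# Example: D_tr = 1e-10 m^2/s gives R_h of roughly 2.8 nm, a micelle-sized particle.
print(hydrodynamic_radius(1e-10) * 1e9, "nm")
```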
Molecular Mobility of the TM2 and TM3 Fragments in SDS Micelles
The incorporation of two 15N-labeled alanines into the TM2 and TM3 fragments enabled the exploration of the dynamics of the peptide backbone in a residue-specific manner, by measuring three relaxation parameters that characterize the backbone 15N nuclei. Longitudinal (R1) and transverse (R2) relaxation rates were extracted with high accuracy from the experimental data (Figs F-H in S1 File). The obtained R1 and R2 values, together with the steady-state {1H}-15N NOEs (Fig H in S1 File), allowed us to apply the isotropic rotation model in the model-free approach [35].
The experimental R1 relaxation rates fall in the range 1.3-1.5 s^-1 for all studied species (Fig I in S1 File). The estimated R2 relaxation rates are comparable for three of the four alanines examined in the TM2 and TM3 segments. Ala225 in the TM3 fragment clearly shows a decreased relaxation rate R2, which dropped to 5 s^-1 compared with 8.2 s^-1 detected for Ala233 (Table C and Fig I in S1 File). Finally, the {1H}-15N NOEs, evaluated with relatively lower accuracy, reveal moderately higher values for TM2 (0.65-0.55) than for TM3 (0.6-0.45) (Table C and Fig I in S1 File). The relaxation data acquired for the TM2:TM3 pair in the SDS micelle are similar to those obtained for the separate TM2 and TM3 fragments. The measured {1H}-15N NOE values for Ala80 and Ala233 in the TM2:TM3 pair could suggest more restricted motions on the ps-ns time scale compared to the individual components.
The parameters of backbone dynamics (S2, τm) were calculated for the 15N-labeled alanines with the model-free approach, assuming an isotropic diffusion model. The values of the overall correlation times (τR) were extracted from the R1/R2 ratios (Fig J in S1 File); values of 4.99 ± 0.05, 6.97 ± 0.08, and 6.87 ± 0.03 ns were obtained for the TM2, TM3, and TM2:TM3 peptides, respectively. The model-free analysis clearly demonstrated a more stable structure for the TM2 peptide. The S2 values obtained for the individual TM2 fragment in the SDS surfactant were 0.98 and 0.86 for Ala80 and Ala88, respectively. In the case of the TM2:TM3 assembly, the S2 decreased to 0.84 and 0.70 for Ala80 and Ala88, respectively, but still remained in the range characteristic of folded proteins (Fig J in S1 File). The TM3 segment exhibited lower S2 values, namely 0.51 and 0.78 for Ala225 and Ala233, respectively. In the TM2:TM3 pair, similar S2 values were obtained for both 15N-labeled alanines (Fig J in S1 File). The relaxation analysis demonstrated that the dynamic processes in the TM3 fragment were substantially different from those in TM2. Intense high-frequency motions in the ns-ps range were observed in the TM3 peptide, ascribed to Ala225 at the N-terminal part; these were detected neither in TM2 nor in the C-terminal part of TM3 in the SDS micelle.
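For orientation, the order of magnitude of these correlation times can be recovered from the relaxation rates with the widely used slow-tumbling approximation; the sketch below assumes a 15N Larmor frequency of about 81 MHz (an 18.8 T field) and is not the model-free fit performed in this study:

```python
# Rough estimate of the overall rotational correlation time from R2/R1:
# tau_c ≈ sqrt(6*R2/R1 - 7) / (4*pi*nu_N), valid in the slow-tumbling limit
# for residues without large-amplitude internal motion.
from math import pi, sqrt

def tau_c_from_rates(r1: float, r2: float, nu_n_hz: float = 81.1e6) -> float:
    """Return an estimate of tau_c in seconds for given 15N R1 and R2 (s^-1)."""
    return sqrt(6.0 * r2 / r1 - 7.0) / (4.0 * pi * nu_n_hz)

print(f"{tau_c_from_rates(1.4, 8.2) * 1e9:.1f} ns")  # ~5 ns, same order as reported
```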
Concluding Remarks
In this work, we computationally analyzed the assembly of the four α-helical transmembrane regions of BTL, restricted by the predicted transmembrane helix-helix interactions TM2-TM3 and TM1-TM4. Of the six possible ways in which the transmembrane regions could be arranged, we have identified the three most probable arrangement types using Monte Carlo simulations. The most frequently observed arrangement has the key transmembrane segments, TM2 and TM3, positioned diagonally opposite to each other. The distances between these two transmembrane regions were analyzed and are supported by the NMR spectroscopy results.
Furthermore, the structure of the TM2:TM3 pair was analyzed in more detail using several experimental methods. The existence of the TM2:TM3 pair in an ionic SDS micellar environment and the interaction between the two fragments were supported by results from NMR spectroscopy and FRET efficiency measurements. Additionally, the 40 ns molecular dynamics simulations indicated the existence and stability of the TM2:TM3 pair in the SDS micelle.
The kinks observed in the two key BTL transmembrane segments at the positions of the two central prolines (Pro85 in TM2 and Pro231 in TM3) could be one of the important structural features explaining the mechanism of transport of different anions by BTL. We observed two TM3 conformers in SDS solution, differing in the cis and trans rotamers of the Leu230-Pro231 peptide bond. We therefore conclude that the proline kink confers flexibility on this transmembrane region. The transition between these two states could facilitate the molecular uptake process. This hypothesis is further supported by the strong conformational exchange processes at the N-terminal part of TM3, which probably arise from rotation around the Leu230-Pro231 peptide bond. This rotation could lead to changes within the TM2:TM3 pair or even in the whole four-helix bundle, in turn leading to the transition between the two distinct functional states observed in the native membrane environment [10].
A comparison of the 3D structures of TM2:TM3 pair in SDS micelles with previously solved structures of individual fragments demonstrates remarkable changes in the position of peptides in the micelle. TM3 in the TM2:TM3 pair is likely positioned more deeply in the hydrophobic part of the SDS micelle than in a sample of TM3 alone [16]. The detailed structure of an individual α-helix likely depends on the neighboring helices; additional experimental data is needed to confirm the structure of this four-helix bundle.
To understand the arrangement of the BTL transmembrane helices and the structure of the transport channel at atomic level, it is essential to have knowledge of the exact oligomeric state of the protein, which is still lacking. Here, we made key steps towards achieving this goal. We suggested the probable assembly of the transmembrane regions of BTL. We also characterized the structure of the two functionally important transmembrane regions, TM2 and TM3, which in turn led us to hypothesize about conformational changes involved in the transport of ligands through the channel.
Synthetic TM2 and TM3 Peptides
Synthetic peptides TM2B and TM3 (Ser73-Leu99 and Gly220-Tyr238, respectively), corresponding to the two key transmembrane segments of BTL, were purchased from CASLO Laboratory, Denmark (www.caslo.com) and were subjected to NMR and FRET experiments. Both peptides were synthesized as lyophilized trifluoroacetate salts. To avoid problems during synthesis, purification and NMR sample preparation, the extreme hydrophobicity of the peptides was moderated by adding four lysine residues (a LysTag, KKKK) at the C-termini. The amino acid sequences of synthetic TM2B and TM3 are SSFCLFVATLQSPFSAGVSGLCKAILL (residues 73-99) and GSVQCAGLISLPIAIEFT (residues 220-238), each followed by the C-terminal KKKK tag, with purity higher than 93.8%. To measure the relaxation parameters with heteronuclear NMR spectroscopy, the alanines of both the TM2B and TM3 fragments were 15N-labeled (15N-Ala80, 15N-Ala88, 15N-Ala225, and 15N-Ala233).
Predicting Transmembrane Helix-Helix Interactions
The interactions between transmembrane helix-helix pairs in bilitranslocase were predicted using the open-source IMP program (http://integrativemodeling.org) [24] as well as the TMhit web server [25].
The IMP predictions take into account the complete transmembrane regions and not the individual residues. In this case, the BTL transmembrane region sequences with defined topologies served as the input. The rigid body representation of each transmembrane region was generated considering DOPE [23], excluded volume and packing restraints. The DOPE statistical potential for transmembrane proteins was developed and used internally at the Sali-Lab. These rigid body representations do not take into consideration the structures determined by either NMR or MD methods in our previous studies. Because transmembrane helices can interact in multiple ways [36], all possible transmembrane helix-helix pairs of BTL were considered and their conformations were optimized. Only the best scoring transmembrane pairs were regarded as interacting and considered for further analysis.
The TMhit algorithm [25] was used to predict the transmembrane helix-helix interactions based on residue contacts. It incorporates contact propensities, physiochemical and structural information to predict contact residues and their pairing relationships or helix-helix interactions. Previously predicted transmembrane regions and their topologies served as the input for the algorithm.
Monte Carlo Sampling
The probable arrangements of the four BTL transmembrane regions were explored using MC sampling. The initial configurations of the four BTL transmembrane regions, considering only the Cα atoms, were constructed based on the conformations generated from previous MD simulations [16,17]. The transmembrane region sequences, topologies and loop connectivity served as the input parameters. The primary restraints were defined by the predicted transmembrane helix-helix interactions and the filter on the crossing angles [37]. Additional restraints were applied to the transport channel diameter and to the tilt and depth (translation along the z-axis) of the transmembrane helices [28,36]. This discrete conformational space, defined by the applied restraints, was then sampled using the Monte Carlo method with varying temperatures.
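The sampling strategy can be illustrated with a generic Metropolis loop; the scoring function and the move generator below are placeholders standing in for the IMP restraints and rigid-body moves described above, so this is a sketch of the approach rather than the code used in this work:

```python
# Generic Metropolis Monte Carlo loop over rigid-body configurations.
import math
import random

def metropolis_sample(initial_state, score, propose_move,
                      n_steps: int = 100_000, temperature: float = 1.0):
    """Return the lowest-scoring state visited (lower score = more favourable)."""
    state = best = initial_state
    current = best_score = score(state)
    for _ in range(n_steps):
        candidate = propose_move(state)            # e.g. a small rotation, translation or tilt
        delta = score(candidate) - current
        if delta < 0 or random.random() < math.exp(-delta / temperature):
            state, current = candidate, current + delta
            if current < best_score:
                best, best_score = state, current
    return best, best_score
```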
NMR Spectroscopy and 3D Structure Evaluation
The translational movements of the TM2 and TM3 transmembrane fragments and of the TM2:TM3 pairs in SDS-d25 micelles were characterized using the DOSY technique with 64 gradients on an Agilent VNMRS 600 NMR spectrometer equipped with a diffusion-specific probehead. The evaluation of the 3D structure of the TM2:TM3 pair in the SDS micelle environment was performed based on the 1H-1H NOESY constraints obtained from our previous studies [16,17]. Heteronuclear 1H-15N HSQC and 15N-edited NOESY data sets were recorded for the TM2 and TM3 peptides containing 15N-labeled alanines: Ala80 and Ala88 (TM2) and Ala225 and Ala233 (TM3). The 15N relaxation measurements (R1, R2, and {1H}-15N NOE) were conducted at 303 K at a magnetic field of 18.8 T. All applied procedures are described in detail in the S1 File.
A 25 ns molecular dynamics trajectory in a water bath, using the TAV protocol, was used to evaluate the 3D structure of the TM2:TM3 pair in an SDS micelle. The simulations were performed with the AMBER 9 software using the parm99 force field. The resulting structures were analyzed using the ptraj program included in the AMBER software bundle.
Supporting Information S1 File. Detailed description of FRET, NMR and MD results. Binding parameters for the transmembrane TM2 and TM3 segments in SDS micelles from FRET experiments (Table A). Hydrodynamic parameters from DOSY spectroscopy (Table B). 15N relaxation rates for the 15N-labeled alanines in the TM2 and TM3 peptides and the TM2:TM3 pairs, extracted from NMR data (Table C). CD spectra (Fig A). Distance between Pro85 and Pro231 in the TM2 and TM3 fragments of the BTL protein (Fig B). 1H-15N HSQC spectra (Fig C). FT PGSE analysis of the DPFGDSTE experiment for the SDS micelle, TM2, TM3, and the TM2:TM3 mixture (Fig D). Spatial structures of the studied species from NMR data (Fig E). Experimental R1 and R2 relaxation rates for the 15N-labeled alanines in the TM2 fragment (Fig F). Experimental R1 and R2 relaxation rates for the 15N-labeled alanines (Ala225 and Ala233) in TM3 (Fig G). Experimental R1 and R2 relaxation rates for the 15N-labeled alanines (Ala225' and Ala233') in the minor conformation of TM3 (Fig H). Experimental values of 15N R1 and R2 relaxation rates and {1H}-15N NOE (Fig I). Results of the analysis of 15N relaxation data for the TM2 and TM3 fragments (Fig J). (DOCX) | 7,191.8 | 2015-08-20T00:00:00.000 | [
"Biology",
"Chemistry",
"Materials Science"
] |
Borders and Beyond: Priests and Laity, Jews and Christians
As Pope Benedict has proclaimed 2010 as "the year of the priest," it is appropriate for both Christians and Jews to reflect on what meaning the priesthood continues to hold for us today. Perhaps no other institution is more dissonant with our contemporary democratic and egalitarian culture than is the priesthood. Does it have any constructive role to play in shaping and influencing modern religious life, not only for priests and the ecclesiastical hierarchies themselves, but also for the faithful laity of each community?
It is also important to probe to what degree the priesthood - which historically functioned as a prime border-constructing institution both intra-religiously (i.e., separating holy priests from laypersons within the same community) and inter-religiously (i.e., separating our community from those outside our faith) 1 - has implications for relations between Christians and Jews today and in the future. Do the concepts of priesthood and its call to holiness militate against closer relations between us or can they somehow shed light on our commonalities and interrelated missions?
We should not play the role of revolutionary Greenwich Village theologians. Whatever the answers are to the above questions, they need to emerge from an honest and authentic examination of our respective traditions about priests and the priesthood. This does not preclude us finding fresh and constructive new answers for the future, for a theologically consistent extension of past teachings need not be confined to the ways our forefathers lived these concepts in the empirical past. In this context, I offer a thesis touching on both the formal legal institution as well as the spiritual aspirations of the priesthood in Judaism.
The Jewish Priesthood: Definitions and Preliminaries Jewish priests - "kohanim" in Hebrew - are central functionaries in the divine services and other tasks mandated in Jewish scriptures, particularly in the Pentateuch, the five books of Moses from Genesis through Deuteronomy. Jews know these sacred books as Torah. 2 For the most part, Jewish scriptures delineate priests as a distinct class of males separate from the rest of the Jewish people, and establish that priestly status derives from paternal heredity. That is, according to the Bible and Jewish tradition, one is a kohen if - and only if - his father is a kohen.
Priests were common to many ancient Near East cultures and religions, and it is clear that the specific Jewish institution of priesthood had its basis in the practices of non-Jewish cultures of biblical times. For example, Genesis 14:18-20 tells us of Melchizedek, king of Salem, who was a priest. Joseph married the daughter of the Egyptian priest of On (Gn 41:45) and Moses' father-in-law, Jethro, was a Midianite priest (Ex 2:16). The biblical establishment of the Jewish priesthood is a prime example of the general methodology of the Torah. It utilizes institutions and practices common to the pagan cultures surrounding the Israelites, but transforms them, striving to purify them of their idolatrous or immoral elements before commanding them to the Israelite nation. Evidently, the God of the Bible was a superb pedagogue. (S)He understood that the historical people of Israel could not radically divorce themselves from all the cultural forms around them and to which they were accustomed. Hence, God mandated a new religion for Israel by using old forms and investing them with different norms and meanings.3 The ancient Egyptian priests often were seen as possessing divine character. They often owned large tracts of Egyptian land, which led to their economic domination over lay Egyptians. They were also in charge of death rituals, embalming and burials - which gave them enormous power leading to all sorts of political and spiritual extortion over lay Egyptians, who, like us, were very much concerned with gaining immortality. (How much would you pay someone to guarantee you immortal life?) Jewish priests, too, had a special holiness, but they were never considered anything other than human administrators. And unlike the pagan priests, they were not permitted to own land or have any contact with the dead. Their place was in the Temple, and both cadavers and graves were forbidden from being anywhere within the Temple precincts. Contact with the dead rendered the kohen ritually impure and disqualified him from performing any Temple ritual. One can see evidence of this ancient prohibition in Jerusalem today. The location of the large ancient Jewish cemetery on the Mount of Olives was chosen because Jews wished to enter the Temple precincts quickly after being resurrected in the messianic era. They were forbidden from being buried in the Temple itself, so the cemetery was placed in the most proximate location - the Mount of Olives. Thus did the Torah succeed in circumscribing the power the kohen had over non-priestly Israelites.
As administrators, Jewish priests are referred to as ministrants of God (Is 61:6; Jer 33:21-22; Jl 1:9, 2:17, 13; et al.).More importantly, as Ezekiel 44:16 indicates, their purpose is to draw others nearer to God and to worship of the Holy One.In other words, kohanim are merely conduits or channels that aid the striving of every Jew to reach God.
Who qualifies for the priesthood is a perennial question amongst biblical scholars.Only the male progeny of the first high priest, Aaron, who is descended from the tribal father, Levi?All Levites?Every male Israelite?On this point the biblical laws appear contradictory, but the answer is pregnant with theological and spiritual significance, which I will probe later.
The Functions of Jewish Priests 4
The priests were mainly concerned with the Temple ritual in Jerusalem, but they were not solely limited to it.In general, we can identify four types of priestly functions: (1) Temple cultic functions, (2) Mantic functions, that is, functions concerned with solving mysteries of the future or the past and making decisions in uncertain cases through revealing the divine will, (3) Treatment of impurities and diseases-such as leprosy-that involved special ceremonies, and (4) Judging, teaching and blessing the people.
1. Temple Cultic Functions. The most prominent function of Jewish biblical priests was to offer sacrifices on the altar that stood in the Temple court. The priests' activities in this ceremony are described in detail at the beginning of the Book of Leviticus and they fall into two major functions: sprinkling sacrificial blood on the altar and burning portions of sacrifices. These functions were normally performed by the ordinary priests. Aaron, the high priest, did not participate in this function except when special sacrifices were brought by all the priests themselves - such as the sacrifices of the eighth day of investiture, described in Leviticus 9; the daily offering sacrificed from the day of consecration (Lv 6:12-15), and the sin offerings whose blood is brought into the inner temple (Lv 4:3-21, 16:3-25). Significantly, the high priest plays the central role in the ritual of the holiest day of the year, Yom Kippur, the Day of Atonement. On that occasion, the high priest is the one who administers the sin offerings and who enters the Holy of Holies to ask for atonement for the people of Israel. Atonement comes from God after sincere repentance by the sinners, and the high priest is only what we would call today a "facilitator." A second priestly duty was to sound trumpets on special occasions, such as the pilgrimage festivals and the consecration of the new moon.5 The trumpets served as reminders of the sacrifices of Israel before God (Nm 10:10). On the Day of Atonement in the Jubilee Year, it was obligatory to blow a shofar - a trumpet, which was a ram's horn - throughout the land (Lv 25:9), and on Rosh Hashanah, the New Year, it was obligatory to carry out a "memorial blowing" (Lv 23:24; Nm 29:1). Today, Jews still blow the shofar every year on Rosh Hashanah in every synagogue, but it may be blown by any Jew, not only a kohen.
Another priestly function in this category was carrying the ark that contained the scroll of the Torah when Israel traveled through the desert before entering the land of Canaan, and post-entry before the Temple was built in Jerusalem. Deuteronomy (10:8, 25, 31:9) mentions this as one of the distinguishing features of priesthood. And in all the transportations of the ark during the period of the conquest, scriptures mention that the "priests, sons of Levi," were its "bearers" (Jos 3:3-17; 4:3, 8:33, 9-10, 16-18).
Other priestly Temple functions included burning the frankincense on the altar (Ex 30:7-9), caring for the lamps (Ex 27:20-21; Lv 24:1-4; Nm 8:1-3), and setting out the showbread on the altartable (Lv 24:5-9).(If this is reminiscent of church ritual today, it is no coincidence.)The inner system of priestly Temple ceremonies is rooted in the fundamental conception of the Temple as God's dwelling place, in which the Holy One, in some mysterious and metaphorical way, "lives."2. Mantic Functions.According to Numbers 27:21, in cases of difficult questions or policy such as deciding to embark on an optional war, the high priest was to consult the Urim V'Tumim-the jewel stones located on the breastplate of the high priest (Ex 28:30; Lv 8:8).In order to obtain a reply, the high priest must enter with the Urim V'Tumim "before God," that is, into the sanctum sanctorum, or Qodesh ha-Qedoshim.The use of the Urim V'Tumim was common in the ancient Israelite priesthood, but it seems from Ezra 2 and Nehemiah 7 that by the Second Temple period, the Urim V'Tumim had been entirely forgotten, and the returnees to Zion in 6th century BCE did not know how to reinstate them.The Urim V'Tumim were consulted when it was necessary to decide between two contradictory possibilities, and a yes or no answer was received.Solution by lots was needed in more complex situations, such as the division of allocated areas of the Promised Land to the tribes of Israel.The most famous decision by lots was the selection of the scapegoat of the Day of Atonement, in which the high priest did the casting (Lv 16:7-10).
Priests would also conduct ordeals to resolve doubtful cases.These ceremonies were held by the priest in the court of the sanctuary.One example is the case of a suspected adulteress as described in Numbers 5:11-31.This practice was also discontinued even before the Temple was destroyed.
3. Treatment of Impurity: Purification and Apotropaic Rites.In the ancient Near East, diseases and plagues were viewed not simply as an organic-physiological phenomena, but embodiments of inner spiritual defects coming to rest in the body.Healing was performed either by waiting until the impurity left the body or by purification rituals to hasten its exit.The Bible instructs that priests are the ones to deal with these impurities or diseases.A prophet could heal leprosy, but only by some miraculous action (Nm 12:13; 2 Kgs 5:1-15; cf.Ex 4:6-8).But the regular and systematic cure was in the hands of the priests.Deuteronomy (24:8 cf.21:5) admonishes the people to follow carefully the instructions of the priests pertaining to these matters.This aspect of priestly activity is described in biblical passages dealing with impurities of animals and carcasses (Lv 11), leprosy (Lv 13-14), bodily emissions (Lv 15), and laws concerning impurity of the dead (Nm 19).
4. Judging, Teaching, and Blessing the People.Kohanim also judged.Although this was generally a function of the elders and heads of families, in some towns, priests would participate in judging together with the elders.If a difficult case required higher expertise, Deuteronomy (17:8-13) enjoins the litigants to go up to the chosen city (i.e., Jerusalem) and be judged there, although the assumption is that judging there is in the hands of both the priests and the judges (Dt 17:9, 19:17).
Deuteronomy 21:5 requires that "every law suit" be decided by the priests, but this seems to be only a generalized mode of speech.Apparently the description contained in Deuteronomy essentially reflects actual historical reality according to which the priests participated in judicial authority.As a piece of history, I Samuel 4:18 tells us that Eli the priest achieved the status of a great judge of Israel.Ezekiel says of the priests that "in controversy they shall act as judges" (Ez 44:24).
Importantly, the priests also served as teachers of Torah to the people. This function is mentioned as early as the blessing of Moses found in Deuteronomy 33:10: "They shall teach Jacob thy ordinances, and Israel thy Torah. They shall put incense before thy nostrils, and whole burnt-offering upon thy altar." Individual priests rotated their service time in the Temple, with each deployment lasting only three months of the calendar year. The Talmud contends that in the other nine months of the year, the kohanim taught Torah to the people. Sometimes the priests' teaching did not exist as a special institution, but was a by-product of their other activities. Thus, Torah followed from the legal and moral discussions held before the priests (Dt 17:11, 33:10). Torah was also taught by way of guidance given by the priests to the people in matters of impurities and diseases (Dt 24:8; Hg 2:11ff.). Indeed, the various types of laws of impurity themselves were called "torah" (Lv 11:46, 13:59, et al.) and were to be learned by the public (Lv 10:10-11). Related to this teaching function, the priests were entrusted with preserving the scrolls of the Torah.
The final - and for our purposes most significant - function of the kohanim was offering blessings to the people. The mandate to bless the people occurs on different occasions and in a number of places in the Bible (Lv 9:22; Dt 27:12-26; Jos 8:33-34), but most prominently in the imperative found in Numbers 6:24-26. Says the Lord: "May the Lord bless you and keep you; May the Lord cause His face to shine upon you and be gracious unto you; May the Lord lift up His countenance upon you and grant you peace." This blessing was recited every morning in the Temple. It is important to stress that the text of this blessing clearly indicates that blessing comes through the priest, not from the priest. It is God - and God alone - who is the source of all blessing. The priest is merely a conduit of that divine gift that God bestows upon his children.
The Priesthood after the Destruction of the Temple
When the Romans destroyed the second Jerusalem Temple in 70 CE, of course the Temple sacrifices and purity/impurity rituals were discontinued. As a result, the cultic, mantic and purity/impurity functions of the kohanim also came to an end. Concurrent with this was a democratization process throughout Jewish religious life. The Pharisees and their tradition, from which Jesus emerged and which later became normative rabbinic Judaism, de-emphasized hereditary privilege in Jewish society. Merit - particularly that of Torah scholarship - eclipsed authority derived from pedigree. The famous Pharisaic statement, "A learned bastard takes priority over the ignorant high priest" (Mamzer talmid hakham kodem l'kohen gadol am ha'arets),6 became the religious and social organizational principle after the Temple was destroyed. Moreover, the primary teaching function in Israel was transferred to the Pharisaic rabbis.
However, a few priestly functions - as opposed to enduring priestly restrictions and privileges - did continue and persist until today. The most prominent is the act of contemporary kohanim blessing the people of Israel.7 Today in the Diaspora, during every major holiday the kohanim of each Jewish community rise, cover themselves with their prayer shawls, spread out their arms and fingers in a special configuration, and bless the community with the beautiful blessing from Numbers 6. Again: May the Lord bless you and keep you; May the Lord cause His face to shine upon you and be gracious unto you; May the Lord lift up His countenance upon you and grant you peace.
In response to each part of the threefold blessing, the community responds "May it be Thy will." In the land of Israel, the kohanim perform this function each and every Sabbath in addition to the holidays. And in Jerusalem, the holiest of Jewish places, the priests recite it every day at the end of the morning service.
It is precisely this practice of priestly blessing that provides a key to the essence of the eternal importance of the priesthood. Indeed, I believe that it illuminates the divine mission of all Israel - and perhaps even Christianity - in sacred history. I would like to explore this with you for the remainder of this proceeding.
The Priestly Blessing Today I mentioned earlier that there is a question about who the Bible regards as fit for priestly function: Only the particular subset of the Jewish people who are sons of Aaron from the tribe of Levi and their descendants, or every Israelite? The most important place where the Bible implies that all of Israel should function as priests is Exodus 19:5-6. Immediately before revelation at Sinai, God and the Jewish people commit themselves to be partners in the Mosaic covenant. God proclaims: If you will faithfully obey Me and keep My covenant, you shall be My treasured possession among all people. All the earth is Mine, but you shall be for Me a kingdom of priests and a holy nation.
The implication of this idea is revolutionary. If the function of a priest is to bestow God's blessings upon others, and all Israel is to be a "kingdom of priests," then it can only be the gentile nations of the world who Israel is called upon to bless. Hence some traditional Jewish theologians like Rabbis Obadiah ben Jacob Seforno8 and Samson Raphael Hirsch9 identified this Sinaitic priestly calling as the mandate to spread blessing by teaching the world about God and divine moral values. Indeed, this universal calling is the meaning of Jewish election at Sinai, the very reason for Israel's covenant and religious existence. One early 20th century rabbinic authority, Naftali Zvi Yehudah Berlin, went so far as to claim that in establishing the covenant with Israel at Sinai, God completed His plan for all of creation that began in Genesis.10 Election of Israel is the culmination of creation, not because Jews are the center of the universe, but because Sinai charged the Jewish people to be teachers of all humanity, instructing all people of God's authority over creation and His moral rules for human social order. In other words, Israel was created for the world, not the world for Israel.
The prophet Isaiah poetically expresses in God's name this same universal calling of Israel:
8 15th-16th century Italy; his commentary ad loc.
9 18th century Germany; his commentary ad loc.
10 His commentary on the Pentateuch, Ha-Ameq Davar, Introduction to the Book of Exodus.
I will establish you as a covenant of the people, for a Light of the Nations… Behold, darkness shall cover the earth, and a thick darkness the nations. But God will shine upon you. Nations shall then go by your light and kings by your illumination (42:6, 60:2-3).
The Jewish "nation of priests" will illuminate the world.
The nexus of priesthood and universal blessing cogently explains the spiritual connection between Abraham, who is understood by Jewish tradition as the first Jew and who the rabbis identified as a type of priest,11 with his later descendants who became obligated in the Mosaic commandments after revelation at Sinai. God's original charge to Abraham in Genesis 12:2-3 was to "be a blessing, and through you all the nations of the earth shall be blessed." What is the content of this blessing, of this light? Jewish theological tradition understood Abraham to have assumed the responsibility to be the witness to God's presence in Heaven and on Earth,12 and, as indicated in Genesis 18:19, "to teach the way of the Lord, to do righteousness and justice" (tsedakah u-mishpat). That is, Abraham, his immediate family and his descendants for eternity - the Jewish people - are tasked with the mission of bringing God's blessing to all of humanity and the divine light of the fundamental moral values of righteousness and justice to every corner of creation.
Drawing on this Jewish concept, the Catholic Church likewise considers herself to have assumed this collective priestly function. This is clear in the first letter of Peter (I Pt 2:9), who stated that the whole church is "a chosen race, a royal priesthood." This idea was reiterated in the Second Vatican Council's document, Lumen Gentium (Light of the Nations): "The baptized, by regeneration and anointing of the Holy Spirit, are consecrated to be a spiritual house and a holy priesthood."13 Can Judaism possibly agree to this claim of priesthood by the Church? Does it not inevitably require conceding that the Jewish people have been superseded by the Church as God's chosen people? Does it also not entail dropping the fervent Jewish conviction that the Jewish people are still in living covenant with the Creator of Heaven and Earth? I believe that Judaism can - and should - agree to this claim of the Church, even while Jews must insist that, qua Jews, they remain in living covenant with God.
It is noteworthy that a number of rabbinical authorities and Jewish thinkers in the modern era - all quite "Orthodox," I may add - have described the historical influence and mission of Christianity as identical to the original mission of Abraham, namely bringing the presence of God and His transcendent morality to the world. Here are two examples: Rabbi Jacob Emden in 18th century Germany stated: Christians removed idols (from the nations) and obligated them in the seven moral commandments of Noah so that they would not behave like animals of the field. Christians instilled firmly the nations with moral traits… The goal of Christians [and Moslems] is to promote Godliness among the nations... to make known that there is a Ruler in Heaven and Earth.14 And Rabbi Samson Raphael Hirsch in 19th century Germany proclaimed: The peoples in whose midst the Jews are now living [i.e., Christians] have accepted the Jewish Bible of the Old Testament as a book of Divine revelation. They profess their belief in the God of heaven and earth as proclaimed in the Bible and they acknowledge the sovereignty of Divine Providence... Judaism produced an offshoot [Christianity]… in order to bring to the world - sunk in idol worship, violence, immorality and the degradation of man - at least the tidings of the One Alone.15 Where would the world be without Christianity and its vast influence? Still steeped in rank idolatry and pagan immorality, according to these rabbinic leaders. In effect, these rabbis saw Christianity as playing a role in the covenantal calling that God made to Abraham, that he function as a priest and bring blessing to all the nations of the earth: "Through you all of the nations of the earth shall be blessed!" (Gn 12:3).
If this is so, Jews can view Christians as partners in conveying the priestly blessing of divinity and morality to the world. In this conception, Christians and Jews would co-exist as two independent "nations of priests," each working differently toward the same end of God's plan for sacred history. This is a new claim for Jewish theology and a relatively new claim for Christian theology, which until recently had always insisted that the Church had superseded and completely replaced the Jewish people as the people of God.
Jews and Christians: Priests to the World If we are true to the Bible's account of Abraham and God's challenge to him, we must admit that the Bible does not portray Abraham as a theologian.16 It describes Abraham as a man of faith, of action and of morality. His calling as priest, therefore, should above all denote a commitment to practical action in sacred human history. And it is precisely today that the practical teachings of Abraham and our priestly calling to the world are particularly urgent.
At the dawn of the 21st century, human beings face awesome and terrifying possibilities. We have the tools to improve and protect human life as never before - and we have the means to destroy all human life and God's creation. Civilization as we know it stands on the edge of a precipice. Our values, choices and behavior will spell the difference between a future of blessing and a hellish future in which the world descends into its primordial chaos. After witnessing the Nazi Holocaust, the genocides and democides of the past century, any naiveté or complacency on our part are religious sins. The horrors of the 20th century have taught us that radical evil was real then, and it remains an ever-present potentiality for today and the future. As partners exercising priestly function, the ethical imperative "to do the right and the good" must be foremost in our behavior and theology. We must understand deeply that there is no justification for any teleological suspension of the ethical - whether the telos is theological, political, financial or personal. The moral imperative, as both the Bible and Kant insisted,17 must be categorical.
A number of troubling signs powerfully dominate our cultural and political landscapes.Postmodern secularism has created a pervasive value-orientation whose foundations contain the seeds from which destructive forces can again grow.Hedonism drives much of contemporary life and ethos.Violence saturates our media and popular culture, sometimes appearing as merely another justified form of pleasure.This contributes to the evisceration of moral concern and the numbing of individual conscience, both of which are essential to human flourishing and individual dignity.
Moral utilitarianism has also made a comeback in contemporary academia and high culture.In this ethic, human life possesses no intrinsic value.Individual human life too often becomes a commodity to be traded-and sometimes even discarded.This moral philosophy shares the Nazi denial of the fundamental axiom of Judeo-Christian ethics, namely that all persons are created in Imago Dei, God's Image, and hence that each person's life has non-quantifiable sacred value.
Because relativism has become one of the most accepted moral theories in our time, objectivity and moral absolutes are under ferocious attack.The belief that there is no objective bar by which to measure human actions easily slips into the belief that there is no bar at all for valid moral judgment.And from there, it is but a small step to the denial of ethics entirely.In the political theater, a radical and intolerant Islamist monism has grown into a common threat to Judaism and Christianity and to moderate Muslims around the world as well.It denies Jewish and Christian legitimacy in the Middle East and by implication tolerance of all religious diversity.
Finally, irrational religious extremism has become a potent force in both world politics and religious identity.Although the 21st century is but in its infancy, we have already seen too much violence and mass slaughter committed in the name of God.All these phenomena are frightening dangers and call Jews and Christians to joint action.
Today, Jews and Christians play an essential role in God's sacred plan for human progress in history - indeed for the survival of humanity. We do this together by being nations of priests and bearing public witness to God and his values. As partners in Abraham's priestly mission, we are spiritually obligated to heed the divine call of bringing blessing to the world and to be charismatic peoples, message bearing peoples.
Here is how I see our common testimony: 1.There is a spiritual center to the universe because the world was created by a loving God, who is intimately involved in human lives and who yearns to redeem His children.
Jews and Christians should be unembarrassed about teaching this reality, as was Abraham when he taught his peers about "the God of Heaven and Earth." 2. As the Creator of all, God is the transcendent authority over human life, and he establishes the validity of moral values. Although sometimes difficult to apply, moral values are neither relative nor human conventions, but intrinsic parts of the universe that are essential for human flourishing. The fundamental moral values of righteousness and justice must remain primary to all human endeavors.
3. All persons are created in Imago Dei, the Image of God, and every human being has intrinsic sanctity that derives from this transcendent quality. Therefore all persons possess inherent dignity and must be treated as such. Moreover, the spiritual essence of each person ensures that individual human life is not a process of biological decay toward death but a journey of spiritual growth toward life. Because human life has this transcendent character, human worth cannot be measured solely in utilitarian, social or materialistic terms. And because every person is created in the Divine Image, any assault on innocent human life is an assault on God that diminishes the Divine Presence in our world.
4. Abraham learned from his trial of the binding of Isaac that God loves human life and abhors death.Thus, Abraham's covenantal children must teach that killing innocent persons in the name of God is contrary to the God of our scriptures, and all forms of religious violence are idolatries that the world must reject.
5. As Abraham defended justice and righteousness before the destruction of Sodom and Gomorrah, his children are duty bound to teach social justice and display individual righteousness.It was only Abraham's moral protest to God and concern for the moral treatment of others that distinguished his righteousness from Noah's self-righteousness, and that earned him the privilege to be the father of God's covenantal people.Our commitment to justice and righteousness for all human creatures is the test of our fidelity to our priestly calling that is designed to bring peace and harmony to the world.
6. Lastly, as faithful Christians and Jews believing in messianic history, we must teach the eternal possibility of human progress and moral reform. We cannot fall prey to pessimism, nihilism or a Malthusian acceptance of war, disease and oppression as permanent features of human destiny.18 Hope in the possibility of a peaceful humanity is the meaning of our messianic belief.
Critical theological differences remain-and should always remain-between Judaism and Christianity.Yet both of our faiths demand belief in messianic history and action to make our world a place where God can enter.We share the priestly task to bless the world, to make it a better place, where moral values are real, where human affairs reflect a spiritual center, and where every human life is endowed with meaning.
The prophet Micah offers a stunning description of that time when history culminates in the blessings of the messianic era: Let us go up to the mountain of the Lord and the God of Jacob, that He teach us His ways, and we will walk in His paths. … Let all the peoples beat their swords into plowshares and their spears into pruning hooks. Nations shall not lift up sword against nation, nor shall they learn war anymore. Let every man sit under his vine and under his fig tree; Thus you shall bless the children of Israel: "May the Lord bless you and keep you; May the Lord cause His face to shine on you and be gracious to you; May the Lord lift up His countenance upon you and grant you peace." So they are to invoke My name upon the Israelites, and I will bless them. | 7,001.4 | 2011-04-21T00:00:00.000 | [
"Philosophy",
"Political Science"
] |
Embedding theorems into Lipschitz and BMO spaces and applications to quasilinear subelliptic differential equations
This paper proves Harnack's inequality for solutions to a class of quasilinear subelliptic differential equations. The proof relies on various embedding theorems into nonisotropic Lipschitz and BMO spaces associated with the vector fields $X_{1},\ldots, X_{m}$ satisfying Hormander's condition. The nonlinear subelliptic equations under study include the important p-sub-Laplacian equation, e.g.,
$$
\sum_{j=1}^{m}X_{j}^{*}\left(|Xu|^{p-2}X_{j}u\right) =A|Xu|^{p}+B|Xu|^{p-1}+C|u|^{p-1}+D,\\ 1<p<\infty
$$
where $|Xu|=\left(\sum_{j=1}^{m}|X_{j}u|^{2}\right)^{\frac{1}{2}}$ and $A$ is a constant; $B$, $C$ and $D$ can be in appropriate function spaces. We note that $A$ can be nonzero.
Introduction
One of the main purposes of this paper is to show various embedding theorems into nonisotropic Lipschitz and BMO spaces associated with the vector fields satisfying Hörmander's condition.The other, more importantly, is to apply some of our new theorems proved here to study the local regularity of certain classes of nonlinear subelliptic PDE formed by vector fields.These nonlinear subelliptic equations studied here include the important p-sub-Laplacian as a special case.
Let $\Omega$ be a bounded, open and path-connected domain in $\mathbb{R}^{n}$, and let $X_{1},\ldots,X_{m}$ be a collection of $C^{\infty}$ real vector fields defined in a neighbourhood of the closure $\overline{\Omega}$ of $\Omega$. For a multi-index $\alpha=(i_{1},\ldots,i_{k})$, denote by $X_{\alpha}$ the commutator $[X_{i_{1}},[X_{i_{2}},\ldots,[X_{i_{k-1}},X_{i_{k}}]\ldots]]$ of length $k=|\alpha|$. Throughout this paper we assume that the vector fields satisfy Hörmander's condition: there exists some positive integer $s$ such that $\{X_{\alpha}\}_{|\alpha|\le s}$ span the tangent space of $\mathbb{R}^{d}$ at each point of $\Omega$. We can define a metric as follows: an admissible path $\gamma$ is a Lipschitz curve $\gamma:[a,b]\to\Omega$ such that there exist functions $c_{i}(t)$, $a\le t\le b$, satisfying $\sum_{i=1}^{m}c_{i}(t)^{2}\le 1$ and $\gamma'(t)=\sum_{i=1}^{m}c_{i}(t)X_{i}(\gamma(t))$ for almost every $t\in[a,b]$. Then a natural metric $\rho$ on $\Omega$ associated to $X_{1},\ldots,X_{m}$ is defined by $\rho(\xi,\eta)=\min\{b\ge 0:\ \exists$ an admissible path $\gamma:[0,b]\to\Omega$ such that $\gamma(0)=\xi$ and $\gamma(b)=\eta\}$.
The metric ball is defined by $B(\xi,r)=\{\eta:\rho(\xi,\eta)<r\}$. This metric is equivalent to the various other metrics defined in the work of Nagel-Stein-Wainger [NSW]. Note that the Lebesgue measure is doubling with respect to the metric balls as shown in [NSW]. Thus $(\Omega,\rho)$ is a homogeneous space.
By the Rothschild-Stein lifting theorem (see [RoS]), the vector fields $X_{1},\ldots,X_{m}$ can be lifted to vector fields $\widetilde{X}_{1},\ldots,\widetilde{X}_{m}$ defined on $\widetilde{\Omega}=\Omega\times T$, where $T$ is the unit ball in $\mathbb{R}^{N-d}$, by adding extra variables so that the resulting vector fields are free, i.e., the only linear relations between the commutators of order less than or equal to $s$ at each point of $\widetilde{\Omega}$ are those forced by antisymmetry and the Jacobi identity. Let $G(m,s)$ be the free nilpotent Lie algebra of step $s$ with $m$ generators, that is, the quotient of the free Lie algebra with $m$ generators by the ideal generated by the commutators of order at least $s+1$. Then $\{X_{\alpha}\}_{|\alpha|\le s}$ are free if and only if $d=\dim G(m,s)$. We also define $Q=\sum_{j=1}^{s}j\,m_{j}$, where $m_{j}$ is the number of linearly independent commutators of length $j$. This integer $Q$ is called the homogeneous dimension associated with the vector fields.
We now define the Sobolev space $W^{1,p}(\Omega)$ to be the completion of $C^{\infty}(\Omega)$ under the norm $\|\cdot\|_{W^{1,p}(\Omega)}$. We also define $W^{1,p}_{0}(\Omega)$ as the completion of $C^{\infty}_{0}(\Omega)$ under the same norm $\|\cdot\|_{W^{1,p}(\Omega)}$. Let us review briefly the known results on embedding theorems, especially Poincaré type inequalities for vector fields satisfying Hörmander's condition. We refer the interested reader to, e.g., [CDG1], [FGW] and [L1], for the embedding theorems of Sobolev type (i.e., the functions under consideration are assumed to be with compact support). For embedding theorems on groups, we refer the reader to [FS], [Kra], [Va] and [VS-CC]. For nonsmooth vector fields, extensive study has been given in [Fr], [FrL], [FrS] and [FGuW].
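The explicit expression of the norm $\|\cdot\|_{W^{1,p}(\Omega)}$ is not written out above; a standard choice for the nonisotropic Sobolev norm associated with the vector fields, and presumably the one intended, is
$$
\|f\|_{W^{1,p}(\Omega)}=\left(\int_{\Omega}|f|^{p}\,dx\right)^{1/p}+\sum_{j=1}^{m}\left(\int_{\Omega}|X_{j}f|^{p}\,dx\right)^{1/p},
$$
possibly written with $|Xf|$ in place of the sum over $j$; either choice gives an equivalent norm.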
Theorem. Let $E\subset\subset\Omega$ and $1\le p<\infty$. Then there exist some $q=q(p)\ge p$ and constants $r_{0}>0$, $C>0$, $c\ge 1$, such that for any metric ball $B=B(x,r)$ with $cB=B(x,cr)\subset\Omega$, $x\in E$, and any $f\in \mathrm{Lip}_{1}(B)$, the following inequality holds provided $0<r<r_{0}$, where $C$, $c$, $r_{0}$ depend only on $E$, $\Omega$, and $f_{B}$ may be taken to be $\frac{1}{|B|}\int_{B}f$.
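The inequality referred to is not displayed above; the standard $(q,p)$-Poincaré inequality for Hörmander vector fields, which is presumably what is meant, reads
$$
\left(\frac{1}{|B|}\int_{B}|f-f_{B}|^{q}\,dx\right)^{1/q}\le C\,r\left(\frac{1}{|cB|}\int_{cB}|Xf|^{p}\,dx\right)^{1/p},
$$
with the average of $|Xf|$ taken over $B$ or over the dilated ball $cB$ depending on the formulation.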
Such an inequality was first proved by D. Jerison [Jer] for all $1\le p<\infty$ and $q=p$. The same inequality in the setting of subelliptic operators was proved by Jerison and Sanchez-Calle in [JeS]. After the work of [Jer] and [JeS], the author of the present paper improved the result of [Jer] for $p>1$ and extended it to the weighted case ([L1]-[L2]). In particular, when $1<p<Q$, it is shown in [L1] and [L2] that $q$ can be taken in the range $1\le q\le\frac{Qp}{Q-p}$. We remark here that by the Rellich-Kondrachov compact embedding theorem for vector fields satisfying Hörmander's condition (see, e.g., [L4]), together with a well-known compactness argument (see, e.g., [L5]), one can recapture the proof of the Poincaré inequality with $2B$ on the right side for all $1\le q<\frac{Qp}{Q-p}$ except the endpoint $q=\frac{Qp}{Q-p}$. However, such a Poincaré inequality usually involves a constant $C$ possibly depending on the ball $B$ in general.
When $p=Q$, a corresponding inequality (giving exponential integrability) was shown in [L3] for all balls $B$ with $cB\subset\Omega$. All the Poincaré type inequalities proved so far carry the restriction $\frac{1}{p}-\frac{1}{q}\le\frac{1}{Q}$. However, if we consider embedding theorems on the
Campanato-Morrey spaces, we will get inequalities with larger differences $\frac{1}{p}-\frac{1}{q}$. To state the theorems proved in [L3], we briefly define the Campanato-Morrey spaces as follows. Let now $f_{B}=\frac{1}{|B|}\int_{B}f(y)\,dy$ be the average over the ball $B$ of the function $f$. We define the following two types of Campanato-Morrey norms. Fix any $R>0$. Let $L^{p,\lambda}(\Omega)$ be the space of all functions $f\in L^{p}_{loc}(\Omega)$ for which the corresponding norm is finite, where the sup is taken over all balls $B=B(x,r)$ with $cB=B(x,cr)\subset\Omega$, $x\in E\subset\subset\Omega$ for some compact subset $E$, and $\rho(B)=r$ (the radius of the ball $B$) $\le R$. It is easy to see that two elements of $L^{p,\lambda}$ can be identified if they only differ by a constant.
We also define the space $M^{p,\lambda}(\Omega)$ of functions for which the corresponding norm is finite, where the sup is taken in the same sense as above (see the display below).
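Neither norm is written out above; one common normalization on homogeneous spaces, consistent with the exponent relation $\frac{1}{p}-\frac{1}{p^{*}}=\frac{1}{\lambda}$ used below (though the precise convention of [L3] may differ), is
$$
\|f\|_{L^{p,\lambda}(\Omega)}=\sup_{B}\left(\frac{r^{\lambda}}{|B|}\int_{B}|f-f_{B}|^{p}\,dx\right)^{1/p},\qquad
\|f\|_{M^{p,\lambda}(\Omega)}=\sup_{B}\left(\frac{r^{\lambda}}{|B|}\int_{B}|f|^{p}\,dx\right)^{1/p},
$$
the suprema being taken over the admissible balls described above.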
Then one of the main theorems proved in [L3] is the following: Theorem. Given any $f\in W^{1,p}_{loc}(\Omega)$ the following is true, where $0<\lambda\le Q$, $1<p<\lambda$ and $p^{*}=\frac{\lambda p}{\lambda-p}$, provided that the number $R>0$ is small enough in the definition of the spaces $L^{p,\lambda}(\Omega)$ and $M^{p,\lambda}(\Omega)$.
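The estimate asserted by this theorem is not displayed; given the exponents involved, it is presumably a Campanato-Morrey embedding of the form
$$
\|f\|_{L^{p^{*},\lambda}(\Omega)}\le C\,\big\||Xf|\big\|_{M^{p,\lambda}(\Omega)},
$$
i.e., a gain of integrability from $p$ to $p^{*}$ within the Morrey scale, with $C$ independent of $f$.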
We note in the above that $\frac{1}{p}-\frac{1}{p^{*}}=\frac{1}{\lambda}$ can be taken much larger than the known gap in the Poincaré inequality, which so far is known to hold only for the gap $\frac{1}{Q}$.
One of the main goals of this paper is to show some new embedding theorems for Hörmander's vector fields which will complement the theorems mentioned above.The current theorems shown here together with the previously known ones will give a fairly complete picture of embedding theorems for vector fields of Hörmander's type.More importantly, we will employ these new theorems to prove a Harnack inequality for a certain class of quasilinear subelliptic differential equations formed by vector fields satisfying Hörmander's condition.
We first state the embedding theorems. From now on, we frequently write $|Xf|$ for $\left(\sum_{j=1}^{m}|X_{j}f|^{2}\right)^{1/2}$. Theorem 1.1. Suppose $p>Q$. Then there exists some constant $c\ge 1$ such that for any $f\in W^{1,p}(\Omega)$ and any ball $B_{R}$ with $cB_{R}\subset\Omega$ the following estimate holds, provided that one of the metric balls $B(x,\rho(x,y))$ and $B(y,\rho(y,x))$ is contained in $\Omega$.
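The estimate itself is missing from the statement above; for $p>Q$ the expected Morrey-type bound, and presumably the one intended, is
$$
|f(x)-f(y)|\le C\,\rho(x,y)^{1-\frac{Q}{p}}\left(\int_{cB_{R}}|Xf|^{p}\,dz\right)^{1/p}\qquad\text{for }x,y\in B_{R},
$$
that is, Hölder continuity of order $1-Q/p$ with respect to the Carnot-Caratheodory metric.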
Remark. If we assume $f\in W^{1,p}_{0}(\Omega)$, $p>Q$, then we can show $f\in C^{0,\gamma}(\Omega)$, i.e., the corresponding Hölder seminorm is finite with a constant $C=C(Q,\alpha)$. Moreover, for any compact subset $K\subset\Omega$ there exists $r_{0}>0$ such that the supremum over $x,y\in K$, $x\ne y$, $\rho(x,y)\le r_{0}$ of the corresponding difference quotient is finite. Theorem 1.3. Given any $1\le p<\infty$ and $c\ge 1$, suppose $f\in W^{1,p}(\Omega)$ and also that there exists a positive constant $K$ such that the stated gradient bound holds for all balls $B_{R}\subset\Omega$. Then there exist positive constants $\sigma$ and $C$ such that the stated exponential estimate holds for all balls. We remark here that Theorems (1.2) and (1.3) do not involve the homogeneous dimension $Q$; both the theorems and the proofs work in more general settings, say, for Grushin or nonsmooth vector fields (see [FGuW]).
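Reading the hypothesis and conclusion of Theorem 1.3 together with the proof in Section 2, the theorem is presumably an embedding into BMO with a John-Nirenberg-type exponential conclusion; a plausible reconstruction of the two missing displays is
$$
\int_{B_{R}}|Xf|^{p}\,dx\le K^{p}\,\frac{|B_{R}|}{R^{p}}\quad\text{for all balls }B_{R}\subset\Omega
\;\Longrightarrow\;
\frac{1}{|B|}\int_{B}\exp\!\left(\sigma\,\frac{|f-f_{B}|}{K}\right)dx\le C
$$
for all balls $B$ with $cB\subset\Omega$.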
By employing the above theorems when $1<p<\infty$, we shall establish certain Harnack inequalities for weak solutions, subsolutions, and supersolutions of quasilinear second order subelliptic partial differential equations of the form (1.4), where $X^{*}_{j}$ is the adjoint of $X_{j}$, which is not necessarily a vector field in general; $u(x)$ is assumed to be in $W^{1,p}_{loc}(\Omega)$. As a special case of our theorems, we will be able to obtain the local regularity for the well-known sub-Laplacian.
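Equation (1.4) is not displayed above; in view of the structure functions $A(x,u,\eta)$ and $B(x,u,\eta)$ used in Section 3, it is presumably the divergence-form equation
$$
\sum_{j=1}^{m}X_{j}^{*}A_{j}(x,u,Xu)=B(x,u,Xu)\qquad\text{in }\Omega,
$$
with $A=(A_{1},\ldots,A_{m})$, of which the p-sub-Laplacian equation quoted in the abstract is the model case $A_{j}(x,u,\eta)=|\eta|^{p-2}\eta_{j}$.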
The structure of the equation (1.4) throughout this paper will be assumed to satisfy conditions in which $a_{0}$, $b_{0}$ are constants and the coefficients $a_{i}(x)$, $b_{j}(x)$ are nonnegative measurable functions satisfying certain integrability properties which will be described in Section 3. Equations of this type have been studied extensively in the literature (see [Ser], [GiT], [Tru], [Zie]). We point out here that the equation (1.4) has been studied in [CDG1] when $p$ is restricted to $1<p\le Q$ under the assumption $b_{0}=0$. Our theorems proved in this paper include all $1<p<\infty$ and also allow $b_{0}\ne 0$. Moreover, the results in [CDG1] require higher integrability conditions on the coefficients $a_{i}(x)$ ($i=1,2,3,4$), $b_{j}(x)$ ($j=1,2,3$) than the ones given here (see Section 3 for details). For example, by our theorems in Section 3 the solutions of, e.g., the very simple model equation quoted in the abstract satisfy a uniform Harnack inequality for all $1<p<\infty$; there $A$ is a constant, and $B$, $C$ and $D$ are in appropriate function spaces which will be specified below. We should also mention that when $1<p\le Q$ we shall assume the solutions are a priori bounded (when $b_{0}=0$ such an assumption can be dropped, see Section 3), while when $p>Q$ the local boundedness and Hölder continuity of the solutions follow from the embedding Theorem (1.1) proved in this paper without obtaining the Harnack inequality first. However, one still needs to prove the Harnack inequality for $p>Q$ because Hölder continuity of the solutions does not lead to it.
We also remark that the proofs of the Harnack inequalities for the solutions of the equation (1.4) rely on Sobolev embedding theorems (see for example [L1], [FGW]) and on the embedding theorems into Lipschitz and BMO spaces proved here. We will also need to adapt the well-known iteration argument of Moser [Mos] to our nonlinear subelliptic case. For the elliptic Euclidean case, we refer the interested reader to [LaU], [Mos], [Nas], [Ser], [GiT], [Tru], [Zie] and references therein. Our Harnack inequalities extend to the subelliptic context results due to J. Serrin, N. Trudinger, Ladyzhenskaya and Ural'tseva (see [Ser], [Tru], [LaU]). We also mention that subelliptic variational problems have been studied by Xu in [X1].
The organization of the paper is as follows: Section 2 contains the proofs of the embedding theorems which will be needed in proving the Harnack inequality. Section 3 is devoted to the proof of the Harnack inequality, Hölder continuity, and estimates of the solutions at the boundary.
We will use the letters $C$, $c$, etc., to denote absolute constants, which may differ from line to line.
Proofs of Theorems (1.1), (1.2) and (1.3)
We recall again that by the Rothschild-Stein lifting theorem (see [RoS]) the vector fields $X_{1},\ldots,X_{m}$ can be lifted to free vector fields $\widetilde{X}_{1},\ldots,\widetilde{X}_{m}$ on $\widetilde{\Omega}$. There is also a metric $\widetilde{\rho}:\widetilde{\Omega}\times\widetilde{\Omega}\to\mathbb{R}_{+}$ associated with the lifted vector fields $\widetilde{X}_{1},\ldots,\widetilde{X}_{m}$. We note that the Lebesgue measure of the ball satisfies $|\widetilde{B}(\xi,r)|\approx r^{Q}$, where $Q$ is the homogeneous dimension of $G$, and $\widetilde{B}(\xi,r)$ is the metric ball in $(\widetilde{\Omega},\widetilde{\rho})$. Thus $(\widetilde{\Omega},\widetilde{\rho})$ is a homogeneous space in the sense of Coifman and Weiss. We should mention that the proofs given in this section are not the simplest ones.
The following lemma is necessary in order to show Theorem (1.1).
Lemma 2.1. Given any metric ball $\widetilde{B}\subset\widetilde{\Omega}$ and any Lipschitz continuous function $f\in\mathrm{Lip}_{1}(\widetilde{\Omega})$, there exist constants $c\ge 1$ and $C\ge 1$ such that for any $\xi\in\widetilde{B}$ and any constant $C_{0}$ the stated pointwise representation estimate is true, where $\widetilde{\rho}(\xi,\eta)$ is the metric distance associated to the lifted vector fields $\{\widetilde{X}_{i}\}_{i=1}^{m}$ and $M(g)$ is the Hardy-Littlewood maximal function of $g$. This lemma was essentially proved in [L1] (Lemma (3.2) in [L1]). In [L1] it was shown that there is a constant $C_{\widetilde{B}}$ for which the estimate holds; but we can show that such a constant $C_{\widetilde{B}}$ can be replaced by $f_{\widetilde{B}}$. Moreover, if we replace the function $f$ by $f-C_{0}$, we will get Lemma (2.1). Actually, in the proof below we will take $C_{0}=f_{c\widetilde{B}}$.
Remark.In the above representation formula, it contains the Hardy-Littlewood maximal function and also the zero order term | f −C 0 |.Such a formula is good enough for most L p estimates for p > 1 as demonstrated in [L1]- [L3].The proof of Theorem (1.1) given below by using this formula is interesting itself when we get rid of the Maximal function by using the boundedness of the maximal function in L p norm and control the terms containing the zero-order term | f − C 0 | by using the known Poincaré inequality from L p to L p .(See the similar argument in [L3].)Of course, the proof can be much simplified by using the new representation formula obtained in [FLW].We thought the proof of Theorem (1.1) given below may have its own interest.
Before we start to prove the Theorems (1.1), (1.2) and (1.3), we briefly explain how the proofs go.We will first prove the theorems for free vector fields { Xi } and the functions f defined on Ω.Secondly, for any function f defined on Ω which satisfies the assumptions in the theorems associated with the vector fields {X i }, we define the new function f (ξ) = f (x, t) = f (x) for x ∈ Ω and ξ ∈ Ω and we prove for so defined f it satisfies the conditions associated with the lifted vector fields { Xi }.Thirdly, we then show the conclusions of the theorems for so defined f will lead to the conclusions for the original function f .
We also mention that on the nilpotent Lie group some similar results to our Theorem (1.1) were derived in [Fol], [SC], [Cou] and [Kra].
Proof of Theorem (1.1):
We first show that the theorem holds for f ∈ Lip 1 ( Ω).The general case follows by an argument of approximation.
Given any ball B ⊂ Ω, and any f
In the above we have used Hölder's inequality in the second inequality, the $L^{p}$ boundedness of the Hardy-Littlewood maximal function in the third inequality, and the Poincaré inequality in the fourth inequality; the desired bound on the supremum follows. Now given any function $f\in W^{1,p}(\Omega)$ with $p>Q$, any metric ball $B=B(x_{0},r)\subset\Omega$, and any $x,y\in B$, we can define the ball $\widetilde{B}=\widetilde{B}((x_{0},0),r)\subset\widetilde{\Omega}$ and the lifted points $\xi=(x,0)$ and $\eta=(y,0)$. We now consider any $x,y\in\Omega$. If we assume that either $B=B(x,\rho(x,y))$ or $B=B(y,\rho(x,y))$ is contained in $\Omega$, we then obtain the stated estimate. Therefore, the assertions in Theorem (1.1) follow.
The compact embedding follows easily from the Ascoli-Arzela Theorem.We omit the details here.
We now turn to the proof of Theorem (1.2).We first state the following lemma: Lemma 2.2.Given any metric ball B = B(ξ 0 , r) ⊂ Ω and any Lipschitz continuous function f ∈ Lip 1 ( Ω).Then there exist constants c ≥ 1 and C ≥ 1 such that for any ξ ∈ B the following is true: and ˜ (ξ, η) is the metric distance associated to the vector fields { Xi } m i=1 .
For general free vector fields of Hörmander type, this lemma is a recent result showed in [FLW] (even for the original vector fields, the above formula was also proved in [FLW], but we do not need that version here).We also remark here that such a representation formula for functions compactly supported in the ball is immediate by the fundamental solution estimate for the sum of squares (see [FeS], [FeP], [NSW], [San]).
Since we do not have the adapted "Polar Coordinates" in the setting of Hörmander vector fields, we will prove the theorem by cutting the kernel on metric "annulus".
Proof of Theorem (1.2):
We only prove the theorem for the case $p=1$; the general case $p>1$ follows by the Hölder inequality. Given any metric ball $\widetilde{B}=\widetilde{B}(\xi_{0},R)\subset\widetilde{\Omega}$, note that for any $\xi\in\widetilde{B}$ we have $\widetilde{B}\subset\widetilde{B}(\xi,cR)$ for some absolute constant $c\ge 1$. Then by Lemma (2.2), reasoning as in the proof of Theorem (1.1), we can establish the corresponding estimate in the lifted setting. For any given ball $B\subset\Omega$ and function $f\in W^{1,p}(\Omega)$, as in the proof of Theorem (1.1), we define the new function $\widetilde{f}(\xi)=f(x)$ for $\xi=(x,t)\in\widetilde{\Omega}$, where $x\in\Omega$, and the corresponding ball $\widetilde{B}$. It is easy to check that the hypothesis on $f$ for all balls $B_{R}\subset\Omega$ for the original vector fields leads to the same hypothesis on $\widetilde{f}$ for all balls $\widetilde{B}_{R}\subset\widetilde{\Omega}$. Thus, arguing as in the proof of Theorem (1.1), we get the desired result.
Proof of Theorem (1.3):
There are several ways to derive this result. The simplest is to use the $L^{p}$ to $L^{p}$ Poincaré inequality and then apply the known John-Nirenberg theorem. But we will give the proof here without using the known Poincaré inequality at all.
Again, we will cut the kernel on the metric annuli.Recall for ξ ∈ B = B(ξ 0 , R) ⊂ Ω the following holds: Note for any given q ≥ 1 we have, The last inequality above follows by the Hölder inequality.We also note On the other hand, by noticing B ⊂ B(ξ, cR) for any ξ ∈ B, we get Therefore, This inequality holds for all q ≥ 1, thus we have shown that provided µ (independent of B and f ) is not too large.The above inequality says For any given function f and ball B = B(x 0 , r) ⊂ Ω, we define B = B((x 0 , 0), r) ⊂ Ω, f (x, t) = f (x).By using the following fact proved in
The Harnack inequalities
We will establish in this section certain Harnack inequalities for weak solutions, subsolutions, and supersolutions of quasilinear second order subelliptic partial differential equations of the form (1.4) under the structural conditions (3.1) below on the equation (1.4).
The structure of the equation (1.4) throughout this paper will be assumed to satisfy the conditions (3.1) below, where $p>1$, $a_{0}$, $b_{0}$ are constants, and $a_{i}(x)$, $b_{i}(x)$ are nonnegative measurable functions satisfying certain integrability properties which will be described below.
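The conditions (3.1) themselves are not written out above. A Serrin-type arrangement that is consistent with the surrounding text (the roles assigned to $a_{0}$ and $b_{0}$, the later reduction $a_{3}=a_{4}=b_{3}=0$, and the model equation in the abstract), offered here only as a plausible reconstruction rather than the paper's exact hypotheses, is
$$
|A(x,u,\eta)|\le a_{0}|\eta|^{p-1}+a_{1}(x)|u|^{p-1}+a_{2}(x),
$$
$$
\eta\cdot A(x,u,\eta)\ge|\eta|^{p}-a_{3}(x)|u|^{p}-a_{4}(x),
$$
$$
|B(x,u,\eta)|\le b_{0}|\eta|^{p}+b_{1}(x)|\eta|^{p-1}+b_{2}(x)|u|^{p-1}+b_{3}(x).
$$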
We now define the notion of solutions, subsolutions and supersolutions of the equation (1.4). A function $u(x)$ is said to be a weak solution (subsolution, or supersolution) of (1.4) in $\Omega$ if $u(x)\in W^{1,p}_{loc}(\Omega)$ and the integral identity (3.2) (respectively, the corresponding inequality) holds. We note here that if (3.2) holds for all $\varphi(x)\ge 0$ with $\varphi(x)\in C^{1}_{0}(\Omega)$, then a standard approximation argument will show that it still holds for all $\varphi(x)$ admitted in the definition.
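The identity (3.2) is not displayed above; for the divergence-form equation sketched after (1.4), the weak formulation is presumably
$$
\int_{\Omega}A(x,u,Xu)\cdot X\varphi\,dx-\int_{\Omega}B(x,u,Xu)\,\varphi\,dx=0
$$
for all admissible test functions $\varphi$, with the equality replaced by the appropriate inequality (for nonnegative $\varphi$) in the definitions of weak sub- and supersolutions; the sign convention attached to each is not recoverable from the surrounding text.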
We now let $\lambda(\rho)$ be a smooth function defined for $\rho>0$ and such that $\lambda(\rho)\to 0$ as $\rho\to 0$. We also define the space $L^{Q,\lambda(\rho)}$ accordingly. We assume the functions $a_{i}(x)$, $b_{j}(x)$ in the structure condition (3.1) to lie in such a space with a certain $\lambda(\rho)$. More precisely, when $p<Q$ we will assume suitable membership in these spaces for some $\alpha>0$, $i=2,4$; $j=1,2,3$, and in this case we set $B=B_{3\rho}(x_{0})$ and define the quantity (3.4). If $p>Q$ we assume that all $a_{i}$, $b_{j}$ are in $L^{p}(\Omega)$ and define the quantity (3.6). Remark. If we only assume $\lambda(\rho)>0$ satisfies a certain Dini condition, i.e., $\int_{0}^{1}\frac{\lambda(\rho)}{\rho}\,d\rho<\infty$, then the proofs of all the theorems below still hold with minimal modifications.
Besides the embedding theorems proved in Section 1, we also need the following lemma to prove the Harnack inequality.
Remark.When p < Q, if we only assume f ∈ L Q loc (Ω) but assume the L Q norm is small then this lemma still holds as one can see from the proof given below.
Proof: We first assume p < Q.Given each fixed small enough r > 0.
Then we can find a partition of unity of the domain $\Omega$; more precisely, there exists a finite sequence of metric balls with the usual properties. We note that we have used the Sobolev inequality for the functions $u_{i}(x)$ supported in $B_{i}$ (see, for example, Theorem C in [L1]). If we replace the constant $Cr^{\alpha p}$ by $\epsilon>0$ we will get our claim. We note that the precise constant $C(r,p,Q,\alpha)$ can be calculated. Now let $p>Q$ and use the same partition of unity as above. We note again that $\mathrm{supp}\{u_{i}\}\subset B_{i}$ and $p>Q$; then, by Theorem (1.1) in Section 1, $u_{i}(x)\in L^{\infty}(B_{i})$ and its norm is bounded accordingly. Then by setting $\epsilon=Cr^{p-Q}\,\|f\|^{p}_{p,\Omega}$ we will get the proof. When $p=Q$, $u_{i}$ is exponentially integrable as shown in [L3] and in particular lies in $L^{t}_{loc}$ for all $t>Q$. We now assume $f\in L^{t}_{loc}(\Omega)$ for some $t>Q$; then, arguing as above and taking $\epsilon=Cr\,\|f\|^{Q}_{t,\Omega}$, we will get the desired result. All the results proved in this paper will be of a local nature. We will simply denote a ball of radius $\rho$ as $B_{\rho}$ and drop the center in the notation because the centers are not important here. Theorem 3.9. Suppose that $u(x)$ is a nonnegative weak solution of (1.4) in a metric ball $B_{3\rho}\subset\Omega$ with $0\le u<M$ in $B_{3\rho}$. Then the Harnack-type estimate (3.10) holds. For the standard Harnack inequality stated below to hold, we need to assume that $a_{3}(x),a_{4}(x),b_{3}(x)\equiv 0$.
Corollary 3.11. Suppose that $u(x)$ is a nonnegative weak solution of (1.4) in a metric ball; then the standard Harnack inequality holds. The special case of our theorem, i.e., $b_{0}=0$, has been obtained in [CDG1] when $1<p\le Q$, but with stronger assumptions on the coefficients $a_{i}(x)$ and $b_{j}(x)$. In the case $b_{0}=0$ we do not need to assume the boundedness of $u(x)$, provided that the functions in the structure conditions (3.1) do not depend on $M$ (since $b_{0}M=0$). We treat all the cases $1<p<\infty$ here in a unified way. One of the main features is the availability of the new embedding theorem proved in this paper.
For the weak supersolutions of (1.4) we have the following weak Harnack inequality.
Theorem 3.13. Suppose that $u(x)$ is a weak supersolution of (1.4) in a metric ball $B_{3\rho}\subset\Omega$ with $0\le u<M$ in $B_{3\rho}$. Then the weak Harnack estimate (3.14) holds. For the weak subsolutions of (1.4) we have the following estimate: Theorem 3.15. Suppose that $u(x)$ is a weak subsolution of (1.4) in a metric ball $B_{3\rho}\subset\Omega$ with $0\le u<M$ in $B_{3\rho}$. Then the estimate (3.16) holds for any $\gamma>p-1$, where $C=C(p,Q,a_{0},b_{0}M,\lambda(\rho))$.
We remark here that Theorem (3.15) also holds for p = 1 as one can see from the proof below.It is clear that Theorem (3.9) is a consequence of Theorems (3.13) and (3.15).
The proofs of the above theorems adapt the well-known iteration argument of Moser [Mos]. More closely related arguments can be found in [Ser], [GiT], [Tru] and [Zie]. We now define the functional $\Phi$ below; consequently, the inequalities (3.10), (3.14) and (3.16) may be written compactly in terms of it. Before we prove all the Harnack inequalities we first make the following reductions. We define a modified solution $\bar{u}(x)$ and modified structure functions; thus $\bar{u}(x)$ will satisfy an equation of the form (1.4) where $\bar{A}(x,u,\eta)$ and $\bar{B}(x,u,\eta)$ satisfy the conditions (3.20). Therefore this reduces the structure conditions to the case $a_{3}(x)=b_{3}(x)=a_{4}(x)=0$, i.e., $m(\rho)=0$. For simplicity we will also drop the "bar" from $\bar{A}$, $\bar{B}$, $\bar{u}(x)$, $\bar{a}_{1}(x)$, $\bar{a}_{2}(x)$, $\bar{b}_{2}(x)$ and simply write $A$, $B$, $u(x)$, $a_{1}(x)$, $a_{2}(x)$, $b_{2}(x)$.
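The definition of the functional is missing above; in Moser-iteration arguments of this kind it is customarily
$$
\Phi(s,\rho)=\left(\frac{1}{|B_{\rho}|}\int_{B_{\rho}}u^{s}\,dx\right)^{1/s},\qquad s\neq 0,
$$
with the conventions $\Phi(+\infty,\rho)=\sup_{B_{\rho}}u$ and $\Phi(-\infty,\rho)=\inf_{B_{\rho}}u$, so that the Harnack-type inequalities (3.10), (3.14) and (3.16) become comparisons of $\Phi$ at suitable positive, negative or infinite exponents over comparable balls. The normalization (with or without the factor $1/|B_{\rho}|$) is an assumption here.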
In the following calculations, it will be understood that q > 0 when u satisfies the hypothesis of Theorem (3.15) and that q < 0 when u satisfies the hypothesis of Theorem (3.13).
By employing the structure condition (3.20) and together with (3.23), we get (3.24)B3ρ e (sgn q)b0u ξ p (b 0 u q + |q|u q−1 )|Xu| p ≤ B3ρ e (sgn q)b0u ξ p (b 0 u + |q|)a p 2 u p+q−1 We note the term b 0 B3ρ e (sgn q)b0u ξ p u q |Xu| p can be dropped from both sides of (3.24).After calculation and Hölder's inequality and together with the estimate 0 ≤ b 0 u < b 0 M , we can bootstrap the terms involving |Xu| and we will get , then by Lemma (3.7) (ξ plays the role of u there) and the assumptions on a i (x) and b j (x) (i, j = 1, 2) we get where C depends on λρ (see definition of λ at the beginning of this section) and etc.
We consider the case $q=1-p$ in (3.28). By the Sobolev embedding lemma (Theorem C in [L1]), the exponential integrability when $p=Q$, and Theorem (1.1) when $p>Q$ in Section 1, we obtain the corresponding estimate. We choose $\xi(x)$ as a cut-off function such that $\xi(x)=1$ on $B_{r_{1}}$, $\xi(x)=0$ outside $B_{r_{2}}$ and $|X\xi(x)|\le C(r_{2}-r_{1})^{-1}$. The existence of such a cut-off function was proved in [L1]. (3.30) We note here that $r_{1}$, $r_{2}$, $\rho$ are comparable, and also note that the Lebesgue measure is doubling with respect to the metric balls by the work of [NSW]. Thus by taking the $t$-th root of both sides of (3.30) and setting $s=pt=p+q-1$, we obtain one iteration inequality for positive $s$ and another for negative $s$. We now fix some $s_{0}>0$ and define $s=s_{j}=\chi^{j}s_{0}$, $r_{j}=(1+2^{-j})\rho$, $j=0,1,2,\ldots$
We assume s 0 is so selected that no s j will coincide with p − 1 for otherwise s = s j = p − 1 and q = 0. Therefore 1 + |q| −1 < C for all j.
By (3.31) we obtain (3.33) We have used the fact that χ > 1 and then the corresponding series in the above converges.
If we let j → ∞ we will get It is clear then for any s 0 = γ > p − 1, (3.34) holds and then we have shown Theorem (3.15).Actually, Theorem (3.15) also holds when p = 1 because in the above proof s 0 is allowed to be any positive number.
Suppose now that u(x) is a supersolution, (3.33) holds for any s 0 > 0 and s j < p − 1 and thus We note that the iteration of (3.32) will lead to for any s 0 > 0. Therefore, if we can show that there exists some s 0 > 0 such that then we will have proved Theorem (3.13).
We now let B r be any ball contained in B ρ0 and choose ξ(x) such that ξ(x) = 1 on B r and 0 outside B 2r and |ξ(x)| ≤ Cr −1 .Then we get where v is as in (3.28) when q = p − 1.Thus Theorem (3.9) and (3.13) will follow from Theorem (1.3) in Section 1.
One application of the above theorem is the Hölder continuity of the weak solutions of (1.4).
Theorem 3.37.Suppose that u(x) is a weak solution of (1.4) in Ω which is also locally bounded.Then u(x) is Hölder continuous in Ω and if B ρ0 ⊂ Ω then The proof of the above theorem is fairly standard and we omit the details.
We now consider the estimates of the solutions at the boundary of the certain domains.
Let $S$ be a subset of $\partial\Omega$ and $u(x)\in W^{1,p}_{loc}(\Omega)$. Then we say that $u\le D$ on $S$ if for every $\epsilon>0$ there is a neighborhood of $S$, called $M_{S}$, such that $u\le D+\epsilon$ a.e. in $\Omega\cap M_{S}$. With such a definition we may easily define the notions $\sup_{S}u$, $\inf_{S}u$ and $\mathrm{osc}_{S}\,u=\sup_{S}u-\inf_{S}u$.
If every point of ∂Ω is regular we say ∂Ω is regular, and it is called "uniformly regular" if ρ 0 and θ 0 can be selected independent of x 0 .We also state a theorem which is an extension of Theorem (3.37) to the boundary.Theorem 3.46.Suppose that u(x) is a weak solution of (3.39) in Ω which is also locally bounded.Let x 0 ∈ Ω be regular.Then for any ρ ≤ ρ 0 , γ < 1, All the proofs of the above Theorems (3.41)-(3.46)follow by modifying the proofs of corresponding Theorems (3.9)-(3.15)and (3.37) and we omit the details.One needs to assume that all the vector fields are well defined and satisfying the Hörmander's condition in a larger domain Ω 1 containing Ω so that all the embedding theorems hold for those balls considered in the theorems.
After the paper was written and first circulated (with a slightly longer title) in February 1994, we learnt that some related work on Poincare estimates has also been obtained in [BM], [MS], [Cou2], [HK].A Poincaré type inequality with |f (x) − f B | replaced by |f (x) − f (x 0 )| for solutions to subelliptic quasilinear equations studied in the current paper has been given in [L5] for p ≥ 1 and in [BKL] for p < 1, among other things.We also became aware of the work [HH] for Harnack estimates on Carnot groups in conjunction with the quasiregular mappings, and the interior regularity for subelliptic systems [XZ], and isoperimetric inequality independently derived in [CDG2] similar to that in [FGW]. | 7,541.8 | 1996-07-01T00:00:00.000 | [
"Mathematics"
] |
Capital Investments and Manufacturing Firms’ Performance: Panel-Data Analysis
The main goal of this study was to examine the effects of capital investments on firm performance, using panel-data analysis. For this purpose, financial data were gathered for 60 manufacturing firms based in Serbia, in the period from 2004 to 2016. The main research hypotheses were developed in accordance with the definition, nature
Introduction
Today, Serbia is technologically lagging behind European industries and needs a strong domestic industry to ensure economic sustainability. Serbian industry was largely devastated by sanctions and wars in the 1990s. During that period, cut off from foreign markets, it was impossible for Serbian industry to keep track of technological development. Isolation, as well as the lack of financial resources, left fixed assets technologically obsolete and poorly maintained. Foreign direct investments (FDI) are important, especially those with high technological intensity, since they bring in new technologies and new knowledge and employ domestic labor; but profitable domestic investments in fixed assets, which are not just a path to unjustified and excessive borrowing, represent one of the most important factors for abandoning a perennial economic stagnation. Figure 1 shows investments in fixed assets as a percentage of gross domestic product (GDP) [1], where we can see that these investments in Serbia are mainly below the European average. Capital investments are necessary for growth and economic development, which implies that growth, besides other factors, is a function of investments. However, since accumulation depends on growth, we can also say that investments are a function of growth. Therefore, theoretically, there is a clear interdependence between growth and capital investments.
Sustainable development of manufacturing firms is closely related to the selection and realization of capital investments, or investment projects, regardless of whether it is a replacement, modernization, expansion, or some other type of investment. Moreover, sustainable manufacturing largely depends on the process of selecting and realizing investment projects, since they have to be selected and implemented based on an evaluation of their environmental and social impact, besides the assessment of other associated risks, which can be systematized as investment, financial, organizational, technical, technological, operational, and informational risk [4]. As a consequence, manufactured products should use processes that minimize negative environmental impacts, should conserve energy and natural resources, should be safe for employees, communities, and consumers, and should be economically sound [5]. Managing physical assets and technologies, or investment (capital-intensive) projects, leads to the accumulation of capabilities in the firm, associated with continuous improvement and process innovations, as well as with corporate sustainable development [6]. Hence, capital investments are a crucial link for manufacturing firms to create long-term economic value, as well as to achieve sustainable development, having in mind their social and environmental impacts.
At the firm level, we can say that capital investments, on one hand, have a short-term character, since they represent an expense for the firm, but, on the other hand, they have a long-term nature, since they should bring some benefits to the firm in the future. Accordingly, the main goal of this study was to analyze the relationship between capital investments and firm performance, or, more precisely, to examine the effect of capital investments on firm performance, including both short-term and long-term aspects. Establishing the relationship between capital investments and firm performance, and confirming or disconfirming the effectiveness of capital investments, will contribute to the accumulation of knowledge in this area and provide an insight for future capital investments of manufacturing firms.
Capital investments, i.e., investments in fixed assets, represent an important factor that can serve as a signal in predicting the future profitability of the firm and stock returns [2]. Assessing the impact of investment at the level of the firm has not always been a viable research topic because, for many years, it was hindered by the lack of observed investment data, and it is only recently that scholars have started to document the nature of firms' investment behavior [3]. Since most of the research regarding investment impact analysis focuses on the macroeconomic level, such as the impact of FDI on GDP growth, this paper attempts to fill the gap at the microeconomic level, i.e., the level of the firm.
The rest of the paper is organized as follows. A literature review of the relationship between capital investments and firm performance is presented in Section 2. Section 3 describes the research methodology. Data analysis, including descriptive statistics, the general model, and preliminary assumption tests for panel data, is presented in Section 4. Section 5 analyzes the regression results. Section 6 concludes.
Literature Review
There are a certain number of studies that examine the relationship between firm performance, using different measures of performance, and capital investments, while employing different statistical tests and econometric approaches. The findings are divided, reporting either a negative or a positive relationship between capital investments and firm performance. Our empirical expectation is that there is a positive relationship between capital investments and firm performance because of the definition, nature, and time aspect of capital investments: although they probably bring losses to the firm in the short term, they should increase firm performance in the long term.
Power [7], in the case of US manufacturing firms, found no evidence of a strong positive relationship between productivity and tangible investments, which cautions against the efficacy of fiscal policy based on the premise that investment causes high productivity. The author also concludes that the reason for the weak relationship between productivity and investment is that higher productivity is simply not the primary motivation for investments, and quotes Grabowski and Mueller [8] that overinvestment, poor-quality investments, and low productivity can result if managers maximize their own utility rather than firm profits. Nilsen et al. [9], in the case of Norwegian firms, while examining the relationship between productivity and investments in fixed assets, found that productivity improvements are not related to these investments; more precisely, they found a significant effect of tangible investment on productivity, but this effect vanishes over time. Shima [10] investigated the impact of capital investments on productivity at the firm level using data from Japanese manufacturing industries and also found a negative relationship, which, according to the author, indicates that firms face sunk costs.
Titman et al. [11] showed that US non-financial firms that substantially increase capital investments subsequently achieve negative benchmark-adjusted returns and that the negative capital investment-return relationship is stronger for firms with higher cash flows and/or lower debt ratios, which probably have a greater tendency to overinvest. Jovanovic et al. [12], again in the case of US manufacturing firms, found that capital investments of established firms ("intensive" investments) are negatively related to Tobin's Q, compared with new firms, because a high Q is a signal of low compatibility of old capital with the new and, hence, of high implementation costs specific to incumbents. According to Yao et al. [13], there is a pervasive negative relationship between asset growth and subsequent stock returns of Asian firms, suggesting potential inefficiencies of the region's financial systems in allocating capital and valuing investment opportunities. Using data from 624 firms in the United States, Sircar et al. [14] found that both IT and corporate investments have a strong positive relationship with sales, assets, and equity, but not with net income. Singh et al. [15], using data from 120 firms in 30 countries, showed that environmental technology investments have a negative impact on profitability, i.e., return on assets, through pollution prevention capability, and that firms should relocate their environmental expenditures to enhance their economic performance.
Aktas et al. [16], while examining the relationship between working capital management and firm performance in the case of US firms, found that fixed asset growth is negatively, although statistically insignificantly, associated with firm performance, measured by return on assets. Similarly, Alipour et al. [17], while examining the relationship between working capital management and firm performance in the case of UK firms, showed that tangible fixed assets have a negative and statistically significant impact on return on assets. Jindrichovska et al. [18], in the case of 260 Czech firms, also found a negative relationship between the growth of tangible assets, as a share of total assets, and the return on assets of those firms. Fernández-Rodríguez et al. [19], while examining the influence of ownership structure on the tax rates of Spanish firms, found that the growth of fixed assets, expressed by capital intensity, has a negative and significant relationship with the effective tax rate of state-owned companies, expressed as the ratio of tax expense to pretax income. Aljinović Barać and Muminović [20], in the case of the dairy processing industry in Slovenia, Croatia, and Serbia, found that companies with a higher level of capital investments per employee obtain lower financial performance, expressed by return on assets, and that a possible explanation can be found in the time lag between the moment of investment and the moment in the future when the investment will generate profit.
On the other hand, Grazzi et al. [3], in the case of French and Italian manufacturing firm-level data, using an econometric approach that allows disentangling repair and maintenance episodes from large tangible investments, and after controlling for firm characteristics, found that tangible investments are associated with higher productivity, profitability, and employment. Ching-Hai et al. [21], while examining the relationship between capital expenditures and the corporate earnings of manufacturing firms listed on the Taiwan Stock Exchange, and after controlling for current corporate earnings, found a significantly positive association between capital expenditures and future corporate earnings. Aw et al. [22], also on a sample of Taiwanese electronics producers, found that future firm profitability is improved by investments in both R&D and physical capital. Gradzewicz [23] showed that the productivity of Polish firms falls after investment and slowly recovers thereafter, which is consistent with learning-by-doing effects, and that investments are also associated with a subsequent significant increase in sales. Namiotko et al. [24], in the case of Lithuanian farms, found that the farms showed lower inefficiency in the presence of investment spikes, which indicates that farms operating in the region of increasing returns to scale could increase productivity by increasing their inputs and investments.
Fama and French [25] studied the relationship between firm investments and profitability for the aggregate non-financial US corporations and found that corporate investments lead to higher profitability.Yu et al. [26], while examining China's manufacturing firm-level dataset, showed that the only visible profitability-growth relationship is mediated via capital investments and that capital investments have a positive and significant effect on firms' productivity, both in levels and growth rates, and the effect on sales growth is even bigger.Lööf and Heshmati [27] examined the relationship between performance and tangible, as well as R&D investments of Swedish firms, and found that profitability is strongly associated with physical investments, but not with R&D investments.Johansson and Lööf [28], also on the case of Swedish manufacturing firms, found that the impact of physical capital (investment) on profitability is significant, positive, and systematically larger than for the comparable labor productivity estimations.
Licandro et al. [29] showed in their study that the sales and productivity of innovative Spanish firms rise as a result of large tangible investment episodes and, hence, that these firms substantially improve their market shares, which is not the case for non-innovative firms. Kapelko et al. [30], also in the case of Spanish manufacturing firms, found that capital investments produce a significant productivity loss in the first year after investment, but thereafter productivity improves, resulting in a U-shaped pattern of the relationship. Amoroso et al. [31], in the case of EU firms, while distinguishing between R&D and physical investments, showed that both R&D and physical investments have a positive effect on performance, expressed by operating profit, and that larger firms also obtain higher returns in the presence of risk. Curtis et al. [32], using financial data on mergers and acquisitions, found that capital expenditures, as well as R&D expenditures, have a positive effect on the net profit and future earnings volatility of the analyzed firms. Taipi and Ballkoci [33], on a sample of 30 construction firms in Albania, showed that capital investments have a positive effect on their future profitability, expressed by return on assets. Sudiyatno et al. [34] and Pandya [35] also found that capital investments have a positive effect on profitability, i.e., return on assets, in the case of manufacturing companies in Indonesia and infrastructure companies in India, respectively.
Many researchers have examined the relationship between technology investments and performance, such as, for example, Mithas et al. [36], who found, on a sample of 400 global firms, that technology investments have a positive impact on revenue growth and profitability. Similarly, Arvanitis et al. [37], in the case of Swiss firms, investigated the effects of energy-related technologies on economic performance at the firm level and found a positive direct effect of investment expenditures for energy-related technologies on labor productivity, and a positive indirect effect of energy taxes via investment in energy-related technologies. Also, Bostian et al. [38], using plant-level production data for Swedish manufacturing firms, showed that environmental technology investments have a positive effect on firm performance, measured by productivity changes. Lee et al. [39], in the case of Korean biotechnology firms, while examining the relationship between R&D intensity and firm value, found that total asset investments and asset tangibility have a positive effect on firm value, measured by Tobin's Q.
Although the literature covers a wide variety of firm performance measures, mainly expressed through productivity and profitability, this study focuses on profitability as the final component of the chain: capital investments, improved productivity, increased profitability. Tables 1 and 2 summarize the literature review.
Methodology and Hypotheses Development
Capital investment, or an investment project, can theoretically be defined as a series of cash inflows and outflows, which typically begins with cash outflows (the initial investment), followed by cash inflows and/or cash outflows in subsequent years of the project [40], or simply as a series of outflows that can bring some inflows in the future. A capital investment in a manufacturing firm can be realized in one year or, in the case of large projects, in more than one year, while the benefits are usually collected over several years after realization. In accordance with this long-term nature of capital investments, the theoretical definition, and also the assumption that capital investments from the previous year should affect firm performance in the next year, we can say that capital investments can have a negative effect on firm performance in the short term (during one year, i.e., the year of investment), but they should have a positive effect on firm performance in the long term (the year after investment). Accordingly, we can define our main research hypotheses as follows:
Hypothesis 1: Capital investments have a negative effect on the short-term performance of manufacturing firms.
Hypothesis 2: Capital investments have a positive effect on the long-term performance of manufacturing firms.
For this study, we have chosen manufacturing firms, i.e., capital-intensive firms that require a large amount of capital investments to produce goods. Sixty manufacturing firms from Serbia, selected based on financial data availability, with historical data from 2004-2016 (a total of 600 available observations, with some missing data), were analyzed. As a proxy for firm performance, we have used profitability growth, expressed by ROA (Return on Assets), since ROA, according to Hagel et al. [41], represents a better metric of financial performance than income statement profitability measures: it takes into account the assets used to support business activities and determines whether the company is able to generate an adequate return on these assets rather than simply showing a robust return on sales. As a proxy for capital investments, we have used the capital investment rate.
There are several procedures for choosing the lag length in finite distributed lag models (in which the effect of a regressor X on Y occurs over time rather than all at once), but there is no perfect answer as to which lag length to choose, especially in panel models. Having in mind the statistical problems that can occur, especially in short panels (less than 20-30 years), such as multicollinearity and sample reduction (each time we lengthen the lag by one period, we lose two degrees of freedom), we have chosen one year as the lag length of the regressor. More precisely, for the purpose of this study, to capture the long-term effect of capital investments on performance, we have used a one-year lag of the capital investment rate.
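To make the lag structure concrete, the sketch below shows one way the capital investment rate and its one-year lag could be built from a firm-year panel with pandas. This is a minimal illustration, not the authors' actual code; the file and column names (id, year, net_income, total_assets, fixed_assets) are hypothetical.

```python
import pandas as pd

# Hypothetical firm-year panel; file and column names are illustrative only.
df = pd.read_csv("manufacturing_panel.csv")  # columns: id, year, net_income, total_assets, fixed_assets
df = df.sort_values(["id", "year"])

# ROA = net income / total assets for firm i in year t.
df["roa"] = df["net_income"] / df["total_assets"]

# Capital investment rate CI_it = (K_it - K_i,t-1) / K_i,t-1,
# computed within each firm so another firm's year t-1 is never used.
df["ci"] = df.groupby("id")["fixed_assets"].pct_change()

# One-year lag of CI, used to capture the long-term (year-after) effect.
df["ci_lag"] = df.groupby("id")["ci"].shift(1)
```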
While examining the relationship between these growth rates, we have controlled for time-fixed effects using year dummies, and for certain internal factors, such as firm size, leverage, total asset turnover, and asset tangibility. Financial data for the manufacturing firms were obtained from the Serbian Business Registers Agency database [42].
Panel data describe the behavior of individuals/entities both across individuals/entities and over time; they have both cross-sectional and time-series dimensions. Panel data can be balanced, when all individuals/entities are observed in all time periods, or, as in our case, unbalanced, when individuals/entities are not observed in all time periods, i.e., there are missing data points because of occasional panel attrition. The three main types of panel-data models are the pooled Ordinary Least Squares (OLS) model (which assumes constant coefficients), the fixed effects model (which assumes that the individual-specific effects are correlated with the regressors), and the random effects model (which assumes that the individual-specific effects are not correlated with the regressors). To choose the appropriate panel model, the Hausman test and the Breusch-Pagan Lagrange multiplier test have been employed, as well as appropriate tests regarding the assumptions of serial correlation, heteroscedasticity, and cross-sectional dependence in the analyzed panel data.
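As a rough illustration of the three model types, the sketch below fits pooled OLS, fixed effects, and random effects specifications with the Python linearmodels package. The paper does not state which software was used for estimation, so this is only an analogous sketch; the prepared data file and variable names continue the hypothetical panel from the earlier sketch.

```python
import pandas as pd
import statsmodels.api as sm
from linearmodels.panel import PooledOLS, PanelOLS, RandomEffects

# Hypothetical prepared panel with id, year, roa, ci, ci_lag columns,
# indexed by (entity, time) as linearmodels expects.
panel = (
    pd.read_csv("manufacturing_panel_prepared.csv")
      .dropna(subset=["roa", "ci", "ci_lag"])
      .set_index(["id", "year"])
)
y = panel["roa"]
X = sm.add_constant(panel[["ci", "ci_lag"]])

pooled = PooledOLS(y, X).fit()                     # constant coefficients
fixed = PanelOLS(y, X, entity_effects=True).fit()  # firm effects correlated with regressors
random = RandomEffects(y, X).fit()                 # firm effects uncorrelated with regressors

for name, res in [("Pooled OLS", pooled), ("Fixed effects", fixed), ("Random effects", random)]:
    print(name, dict(res.params))
```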
Descriptive Statistics
Results for the overall descriptive statistics, as well as descriptive statistics decomposed into between and within standard deviations, for the main variables of interest, firm performance expressed by ROA and firm capital investments CI, are presented in Table 3. On one hand, Table 3 shows the overall descriptive statistics of the main variables, where we can see that the 60 firms (ID), analyzed over the period 2004-2016, have a mean ROA of −0.054 with a standard deviation of 1.087 and a mean CI of 0.093 with a standard deviation of 0.595. On the other hand, Table 3 shows descriptive statistics decomposed into between-firm and within-firm (over time) variation. First, it is obvious that firm (ID) does not vary over time (Year), but, since we have an unbalanced panel, we can see that time (Year) does vary between firms (ID). The interesting part is that both variables, ROA and CI, have more variation within firms over time (1.037 and 0.559, respectively, for ROA and CI) than between firms (0.441 and 0.204, respectively, for ROA and CI). Also, Table 3 shows the overall, between, and within descriptive statistics for all control variables included in the model. Figure 2 shows bar charts of mean ROA and CI, grouped by firms and by time, and, if we look at graphs (b) and (d), we already have an indication that CI will probably have a negative effect on short-term ROA.
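The within/between decomposition reported in Table 3 is the kind of summary that Stata's xtsum produces; a hedged pandas equivalent is sketched below (overall, between-firm, and within-firm standard deviations, using the hypothetical column names from the earlier sketches).

```python
import pandas as pd

def xtsum(df: pd.DataFrame, var: str, entity: str = "id") -> pd.Series:
    """Overall, between-entity, and within-entity standard deviations of `var`."""
    overall = df[var].std()
    between = df.groupby(entity)[var].mean().std()
    # Within deviation: value minus the firm mean, re-centred on the grand mean.
    within = (df[var] - df.groupby(entity)[var].transform("mean") + df[var].mean()).std()
    return pd.Series({"overall": overall, "between": between, "within": within})

# Example usage on the hypothetical panel built earlier:
# print(xtsum(df, "roa")); print(xtsum(df, "ci"))
```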
Pooled OLS vs Fixed Effects vs Random Effects
First, we must choose between three types of panel models: the pooled OLS model, the fixed effects (FE) model, and the random effects (RE) model. To choose between these model types, the first step is to decide between the FE and RE models using the Hausman test. If we conclude that the FE model is the better one, then we can use the F-test for fixed effects to decide between the FE and OLS models. However, if we conclude that the RE model is the better one, then we should choose between the RE and OLS models using the Breusch-Pagan Lagrange multiplier (LM) test.
Hence, to decide between the FE and RE models, we first perform the Hausman test (Appendix A, Table A1). Table A1, Appendix A, shows that Prob > chi2 (0.473) is higher than 0.05, so we cannot reject the null hypothesis, which implies that there is no correlation between the error term and the regressors, and we can conclude that the RE model is preferred.
Given that the Hausman test showed that, between the FE and RE models, the RE model is the better one, we now need to compare the RE and OLS models. We do this by employing the Breusch-Pagan LM test for random effects, shown in Appendix A, Table A2. Table A2, Appendix A, shows that Prob > chibar2 (1.000) is higher than 0.05, so we cannot reject the null hypothesis, which implies that the variances across firms are zero, and we can conclude that the pooled OLS model is preferred.
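The Hausman statistic underlying the first choice can be computed directly from the FE and RE fits. The sketch below uses the textbook formula H = (b_FE − b_RE)' [Var(b_FE) − Var(b_RE)]^{-1} (b_FE − b_RE), which is chi-squared with k degrees of freedom; it assumes the fixed and random results from the earlier sketch and is illustrative only, not a reproduction of the authors' procedure.

```python
import numpy as np
from scipy import stats

def hausman(fe_res, re_res):
    """Textbook Hausman test comparing fixed- and random-effects estimates.

    fe_res, re_res: fitted linearmodels results sharing the same regressors.
    Returns (statistic, degrees of freedom, p-value).
    """
    common = [c for c in fe_res.params.index if c in re_res.params.index and c != "const"]
    diff = fe_res.params[common].values - re_res.params[common].values
    v_diff = fe_res.cov.loc[common, common].values - re_res.cov.loc[common, common].values

    stat = float(diff @ np.linalg.pinv(v_diff) @ diff)
    dof = len(common)
    pval = stats.chi2.sf(stat, dof)
    return stat, dof, pval

# Example: stat, dof, pval = hausman(fixed, random)
# A p-value above 0.05 (as in Table A1) would favour the random effects model.
```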
In accordance with this model selection procedure, the rest of the paper will focus on the pooled OLS model as the preferred model. However, we also present the results for the fixed effects and random effects regressions. These results can be found in Appendix B, Tables A5 and A6, respectively.
General Model
The estimating equation for the pooled OLS model can be presented as follows:
Y_it = α + β·X_it + μ_it (1)
In Equation (1), Y_it is the dependent variable of entity (i) in time (t), α is the constant intercept, X_it represents an independent and/or control variable of entity (i) in time (t), β is the coefficient for that variable, and μ_it is the error term.
The equation for capital investment, measured as the capital investment rate, can be presented as follows:
CI_it = I_it / K_i,t-1 (2)
In Equation (2), CI_it is the capital investment rate of firm (i) in year (t), I_it is the investment of firm (i) in year (t), or the flow variable, and K_i,t-1 is the tangible fixed assets of firm (i) at the end of the previous year, or the stock variable.
The empirical model, summarized by Equations (1) and (2), is shown in Equation (3):
ROA_it = α + β1·CI_it + β2·CI_LAG_it + β3·SIZE_it + β4·LEV_it + β5·TAT_it + β6·TANG_it + Σ γ_t·YEAR_t + μ_it (3)
where ROA_it is the dependent variable representing the performance measure Return on Assets, calculated as the ratio of Net Income to Total Assets of firm (i) during year (t); α represents the constant; CI_it is the first independent variable, representing Capital Investment, calculated as the difference between the firm's Fixed Assets in year (t) and Fixed Assets in year (t-1), divided by Fixed Assets in year (t-1); CI_LAG_it is the second independent variable, representing the one-year-lagged CI variable for firm (i), used to capture the effect of capital investments on firm performance in the long term; SIZE_it is the first internal control variable, representing the firm's size, calculated as the natural log of Total Assets of firm (i) in year (t); LEV_it is the second internal control variable, representing the firm's Leverage, calculated as the Debt-to-Equity ratio of firm (i) in year (t); TAT_it is the third internal control variable, representing the firm's Total Asset Turnover, calculated as the Net Sales to Total Assets ratio of firm (i) in year (t); TANG_it is the fourth internal control variable, representing the firm's Asset Tangibility, calculated as the Fixed Assets to Total Assets ratio of firm (i) in year (t); YEAR_t is the year dummy variable, used for controlling time-fixed effects; and μ_it is the standard error term of the regression analysis. To ensure normality of the data, natural logs have been used for all variables. Table 4 summarizes the model variables and their calculations.
Pooled OLS Preliminary Assumptions
For the pooled OLS model to be accurate, there are some assumptions which need to be tested. The most important ones are no serial correlation, homoscedasticity, and no cross-sectional dependence.
In longitudinal data, subjects are measured repeatedly over time, and repeated measurements of a subject tend to be related to one another [43]. Because serial correlation in linear panel-data models biases the standard errors and causes the results to be less efficient, researchers need to identify serial correlation in the idiosyncratic error term of a panel-data model, and the Wooldridge test is very attractive because it requires relatively few assumptions and is easy to implement [44]. Serial correlation refers to the situation in which residuals are correlated across time; ignoring serial correlation where it exists yields consistent but inefficient estimates and biased standard errors, and inference about the significance of regressors may be incorrect under serial correlation [45]. Even though serial correlation can be ignored in short panels (less than 20-30 years), we employ the Wooldridge test, which examines serial correlation in panel data. These results are presented in Table A3, Appendix A.
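The Wooldridge procedure can be reproduced step by step: estimate the model in first differences, then regress the first-differenced residuals on their own lag and test whether that slope equals −0.5 with firm-clustered standard errors. The sketch below is a hedged illustration using the hypothetical panel from the earlier sketches, not the xtserial routine itself.

```python
import pandas as pd
import statsmodels.api as sm

# `panel` is the hypothetical (id, year)-indexed DataFrame from the earlier sketch.
# Step 1: regression in first differences (no constant), keep the residuals.
fd = panel[["roa", "ci", "ci_lag"]].groupby(level="id").diff().dropna()
fd_res = sm.OLS(fd["roa"], fd[["ci", "ci_lag"]]).fit()

e = fd_res.resid.rename("e")
e_lag = e.groupby(level="id").shift(1).rename("e_lag")

# Step 2: regress residuals on their lag with firm-clustered standard errors
# and test H0: slope = -0.5 (no serial correlation in the original errors).
sample = pd.concat([e, e_lag], axis=1).dropna()
groups = sample.index.get_level_values("id")
step2 = sm.OLS(sample["e"], sample[["e_lag"]]).fit(cov_type="cluster", cov_kwds={"groups": groups})
print(step2.t_test("e_lag = -0.5"))
```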
When fitting regression models to data, an important assumption is that the variability is common among all observations, which is called homoscedasticity (meaning "same scatter", or constant variance); when the scatter varies by observation, the data are said to be heteroscedastic, which affects the efficiency of the regression coefficient estimators, although these estimators remain unbiased even in the presence of heteroscedasticity [43]. The homoscedastic assumption that the variability is common among all observations may, according to Baltagi [46], be a restrictive assumption for panel data, where cross-sectional units are often of different sizes and, as a result, exhibit different variation. Assuming homoscedastic disturbances when heteroscedasticity is present will yield consistent but inefficient coefficient estimates, the standard errors of the estimates will be biased, and inference about the significance of regressors may be incorrect [45]. To test for heteroscedasticity in panel data, we use the Breusch-Pagan/Cook-Weisberg test. The results regarding heteroscedasticity are also presented in Table A3, Appendix A.
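The paper uses the Stata implementation of this test; a rough Python analogue is the Breusch-Pagan test in statsmodels, sketched below on the pooled regression residuals. It is illustrative only and continues the hypothetical variables from the earlier sketches.

```python
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

# `panel` is the hypothetical (id, year)-indexed DataFrame from the earlier sketch.
y = panel["roa"]
X = sm.add_constant(panel[["ci", "ci_lag"]])
ols_res = sm.OLS(y, X).fit()  # stacked OLS, ignoring the panel structure for this diagnostic

# H0: homoscedasticity. A small p-value points to heteroscedastic errors.
lm_stat, lm_pval, f_stat, f_pval = het_breuschpagan(ols_res.resid, ols_res.model.exog)
print(f"Breusch-Pagan LM p-value: {lm_pval:.4f}")
```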
In Table A3, Appendix A, regarding serial correlation, we can see that Prob > F (0.045) is less than 0.05, so we can reject the null hypothesis and conclude that serial correlation is present in our panel data. Also, in Table A3, Appendix A, regarding heteroscedasticity, we can see that Prob > chi2 (0.0016) is less than 0.05, so again we can reject the null hypothesis and conclude that heteroscedasticity is present in our panel data.
A growing body of the panel-data literature comes to the conclusion that panel datasets are likely to exhibit substantial cross-sectional dependence, which may arise due to the presence of common shocks and unobserved components that ultimately become part of the error term, due to spatial dependence, or due to idiosyncratic pair-wise dependence in the disturbances with no particular pattern of common components or spatial dependence [47]. Ignoring cross-sectional dependence may affect the first-order properties (unbiasedness, consistency) of standard panel estimators, and even if the first-order properties of these estimators remain unaffected, the presence of error cross-sectional dependence may largely reduce the extent to which they provide efficiency gains over estimating with, say, OLS for each individual [48]. According to De Hoyos and Sarafidis [47], assuming a standard panel model, under the null hypothesis μ_it is independent and identically distributed over time periods and across cross-sectional units, whereas under the alternative μ_it may be correlated across cross-sections while the assumption of no serial correlation remains. To test for cross-sectional dependence in panel data, we employ the CD test, as described in Pesaran [49] and Pesaran [50], for a varlist of any length. The results are presented in Table A4, Appendix A.
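For a balanced panel, the Pesaran CD statistic has a simple closed form, CD = sqrt(2T / (N(N−1))) · Σ_{i<j} ρ̂_ij, which is asymptotically standard normal under the null of no cross-sectional dependence. The sketch below implements this simplified balanced-panel version only; the unbalanced case used in the paper requires pair-specific T_ij weights, which are omitted here.

```python
import numpy as np
import pandas as pd
from scipy import stats

def pesaran_cd(resid: pd.DataFrame) -> tuple[float, float]:
    """Pesaran CD test for a balanced panel of residuals.

    resid: DataFrame with one column per entity and one row per time period.
    Returns the CD statistic and its two-sided p-value under N(0, 1).
    """
    T, N = resid.shape
    corr = resid.corr().values                 # N x N matrix of pairwise correlations
    upper = corr[np.triu_indices(N, k=1)]      # rho_ij for i < j
    cd = np.sqrt(2.0 * T / (N * (N - 1))) * upper.sum()
    pval = 2.0 * stats.norm.sf(abs(cd))
    return cd, pval

# Hypothetical usage: pivot the pooled-OLS residuals to a time-by-firm table, e.g.
# resid_wide = panel.assign(e=ols_res.resid).reset_index().pivot(index="year", columns="id", values="e")
# print(pesaran_cd(resid_wide.dropna(axis=1)))
```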
In Table A4, Appendix A, regarding cross-sectional dependence, we can see that the p-values are less than 0.05, so we can reject the null hypothesis and conclude that cross-sectional dependence is present in our panel data.
Results and Discussion
Since all three assumptions required for the pooled OLS model to be accurate (no serial correlation, homoscedasticity, and no cross-sectional dependence) are violated, we use robust standard errors (Driscoll-Kraay standard errors), which help us deal with these violations.
According to Hoechle [51], in order to ensure valid statistical inference when some of the underlying regression model's assumptions are violated, it is common to rely on robust standard errors. Following Hoechle [51], we run a pooled OLS regression with Driscoll-Kraay standard errors, which deals with violations of the heteroscedasticity, autocorrelation, and cross-sectional dependence assumptions.
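In the Python linearmodels package, a Driscoll-Kraay-type covariance is, to the best of my knowledge, available through the kernel-based covariance estimator. The sketch below is a hedged illustration of fitting the full pooled specification that way, not a reproduction of the authors' Stata estimation; the control-variable columns (size, lev, tat, tang) are hypothetical names continuing the earlier sketches, and the year dummies of Equation (3) are omitted for brevity.

```python
import statsmodels.api as sm
from linearmodels.panel import PooledOLS

# `panel` is the hypothetical (id, year)-indexed DataFrame from the earlier sketch,
# assumed here to also carry the control columns size, lev, tat, tang.
y = panel["roa"]
X = sm.add_constant(panel[["ci", "ci_lag", "size", "lev", "tat", "tang"]])

# cov_type="kernel" requests a HAC covariance (Driscoll-Kraay style) that is robust to
# heteroscedasticity, autocorrelation, and cross-sectional dependence.
res = PooledOLS(y, X).fit(cov_type="kernel")
print(res.summary)
```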
Regression is performed according to the general model equation described in Section 4.3, as well as preliminary performed Hausman test, Breusch-Pagan LM, serial correlation, heteroscedasticity, and cross-sectional dependence test.Table 5 shows the results from hierarchical pooled OLS regression testing hypotheses that capital investments probably have negative effect on the firm performance in the short term, but positive effect on the firm performance in the long term, while controlling for time-fixed effects and certain internal factors, such as firm size, leverage, total asset turnover, and asset tangibility.We will now present the results concerning our four models.
Result 1: Model 1 provides evidence that CI has a negative, statistically insignificant, effect on ROA (−0.272) and that CI_LAG has a positive, statistically significant, effect on ROA (0.101), at the 0.001 level. This model confirms our second hypothesis and indicates that, for a certain increase in capital investments, the ROA of manufacturing firms is expected to increase in the year after investment, while holding firm size constant. We can see that firm size (SIZE), as a control variable, has a negative (−0.044) and statistically significant effect on ROA, at the 0.001 level. In principle, larger firms should have larger profitability, since they can achieve a lower cost per unit, which is in line with economies of scale. However, although most studies report a positive relationship between firm size and profitability, there are also researchers who found a negative relationship between these two variables (see, e.g., Kartikasari et al. [52]). According to Pervan et al. [53], a conceptual framework that advocates a negative relationship between firm size and profitability is noted in the alternative theories of the firm, which suggest that large firms come under the control of managers pursuing self-interested goals, so that profit maximization as the firm's objective function may be replaced by a managerial utility maximization function. This may imply that managers of large manufacturing firms in Serbia put their own goals before their companies' goals.
Result 2: Model 2 provides evidence that CI has a negative effect on ROA (−0.815) and CI_LAG has a positive effect on ROA (0.086), with statistically significant coefficients at the 0.001 level. This model confirms both our hypotheses and indicates that, for a certain increase in capital investments, the ROA of manufacturing firms is expected to decrease in the year of investment but to increase in the year after investment, while holding firm size and leverage constant. Also, leverage (LEV), as an additional control variable, has a negative effect on ROA (−0.020). This can be considered a reasonable result, since greater borrowing clearly implies a reduction in profitability, and this rule evidently also applies to manufacturing firms in Serbia. Many researchers have examined capital structure and firm performance and found a negative relationship between leverage and profitability (see, e.g., Ahmad et al. [54]). However, in our case, the effect of leverage on profitability remains statistically insignificant. We can see that, in this particular model, the effect of firm size, as a control variable, on profitability is still negative (−0.046) and statistically significant, at the 0.01 level. Moreover, in this particular model, the intercept is also statistically significant, at the 0.01 level.
Result 3: Model 3 provides evidence that CI has a negative effect on ROA (−0.826) and CI_LAG has a positive effect on ROA (0.083), again with statistically significant coefficients at the 0.001 level. This model also confirms both our hypotheses and indicates that, for a certain increase in capital investments, the ROA of manufacturing firms is expected to decrease in the year of investment but to increase in the year after investment, while holding firm size, leverage, and total asset turnover constant. We can see that total asset turnover (TAT), as an additional control variable, has a positive (0.928) and statistically significant effect on ROA, at the 0.05 level. This can also be considered a reasonable result, since total asset turnover, as an efficiency measure, shows how a firm uses its assets in generating revenue, with a higher value of this ratio indicating better management of the firm's assets. This may imply that manufacturing firms in Serbia manage their assets effectively. A positive relationship between total asset turnover and profitability has been shown in many studies (see, e.g., Dencic-Mihajlov [55]). In this particular model, the effect of firm size and leverage, as control variables, is statistically insignificant and still negative.
Result 4: Model 4 provides evidence that CI has a negative effect on ROA (−0.810) and CI_LAG has a positive effect on ROA (0.081), also with statistically significant coefficients at the 0.001 level. This model again confirms both our hypotheses and indicates that, for a certain increase in capital investments, the ROA of manufacturing firms is expected to decrease in the year of investment but to increase in the year after investment, while holding firm size, leverage, total asset turnover, and asset tangibility constant. Tangibility (TANG), as an additional control variable, has a positive (0.209) and statistically significant effect on ROA, at the 0.05 level. Although opinions are divided on how tangibility should affect profitability, considering the negative effects of amortization costs, this result can be reasonable since, according to Bhuta et al. [56], a firm with a large amount of fixed assets tends to be more profitable because of the increase in its future asset value. Similarly, Al-Jafari et al. [57] found a positive relationship between tangibility and profitability in their study. Moreover, we can see that, in this particular model, the effect of total asset turnover, as a control variable, on profitability is still positive (1.969) and statistically significant, at the 0.05 level. The effect of firm size and leverage, as control variables, remains negative and statistically insignificant.
In addition to these regression results, we can also see that R-squared becomes higher as control variables are added, from 0.067 to 0.114, but still remains very low. This does not automatically indicate that the model is not good, but rather that the predictive power of the model is low, which is very common, especially in cases where the behavior of the entities is hard to predict, as in the social and economic areas. According to Kutner et al. [58], there are three misunderstandings regarding R-squared: (1) a high coefficient of determination indicates that useful predictions can be made (this arises because R-squared measures only a relative reduction from the total variation and provides no information about the absolute precision for estimating a mean response or predicting a new observation), (2) a high coefficient of determination indicates that the estimated regression line is a good fit, and (3) a low coefficient of determination indicates that X and Y are not related (the last two arise because R-squared measures the degree of linear association between X and Y, whereas the actual regression relationship may be curvilinear).
To summarize, all four models formed by hierarchical regression (adding internal control variables one by one) and presented in Table 5 confirm our main research hypotheses, but with different levels of statistical significance. More precisely, the results in Table 5 provide evidence that, for a certain increase in capital investments, the ROA of manufacturing firms is expected to decrease in the year of investment but to increase in the year after investment, while holding firm size, leverage, total asset turnover, and asset tangibility constant and controlling for time-fixed effects as well. Our results support the findings of researchers such as Taipi and Ballkoci [33], Sudiyatno et al. [34], and Pandya [35], who found a positive effect of capital investments on profitability, measured by return on assets. Moreover, our results support and complement the findings of Aljinović Barać and Muminović [20], who found, in the case of the dairy processing industry in Slovenia, Croatia, and Serbia, a negative effect of capital investments on short-term profitability, also expressed by return on assets, for which, according to the authors, a possible explanation can be found in the time lag between the moment of investment and the moment in the future when the investment will generate profit. Although other researchers have used different measures of profitability in their studies, we can say that, in general, the results of this study also support the findings of, for example, Grazzi et al. [3], Aw et al. [22], Fama and French [25], Yu et al. [26], Lööf and Heshmati [27], Johansson and Lööf [28], Amoroso et al. [31], and Curtis et al. [32], who found a positive relationship between capital investments and profitability.
Conclusions
The findings of this study confirmed our main research hypotheses and the empirical expectation that the relationship between capital investments and firm performance should be positive because of the definition, nature, and time aspect of capital investments: they probably bring losses to the firm in the short term, but they should increase firm performance in the long term. Accordingly, we have indeed shown that capital investments have a negative effect on firm performance in the short term, but a positive effect on firm performance in the long term. More precisely, in our panel data set, using pooled OLS regression, we found a statistically significant effect of capital investments on firm performance, measured by return on assets, considering both short-term and long-term aspects, while controlling for time-fixed effects and certain internal factors, such as firm size, leverage, total asset turnover, and asset tangibility.
The results of this research can be of benefit to manufacturing firms, considering that capital investments have an important role in their sustainable development. They can be used by managers of manufacturing firms as a helpful tool when making strategic and investment decisions. These results also support the general fiscal policy which assumes that capital investments have a central role in stimulating growth and that capital investments cause better performance. Generally, the implication of our research is that state governments, and especially the government in Serbia, where all the analyzed firms are from, should encourage and support capital investment activities to ensure economic sustainability, while manufacturing firms, especially in Serbia, should invest more in sustainable production projects, which should be profitable rather than just a path to insolvent borrowing.
This research, however, has some limitations. First, because of the lack of data, this study does not include factors such as, for example, the particular type of manufacturing industry, state or private ownership of the firm, exporter or importer status, or other firm characteristics, which would help us understand the relationship between capital investments and firm performance in a more comprehensive way. Second, the measurement of capital investment should also include amortization costs, but again, the lack of data prevented the inclusion of this component. Since this topic is, in general, poorly covered by the literature concerning regional aspects, it could also be interesting to expand the research and examine how capital investments, in interaction with geographical characteristics, affect firms' performance in different regions and possibly over a longer period of time. However, most of these factors require a larger amount of data, which leads us to the sample size as a third limitation of this study. The results could be affected by the sample size, and a larger one would surely decrease the likelihood of skewing the results, which would increase the power of the study. Nevertheless, these limitations can be a solid ground for future research directions.
Figure 2 .
Figure 2. Bar charts of mean ROA and CI, by 60 firms and over time (2004-2016): (a) Mean ROA by firms; (b) Mean ROA over time; (c) Mean CI by firms; (d) Mean CI over time.
Table 1 .
Summary of literature review-part I.
Table 2 .
Summary of literature review-part II.
Table 4 .
Summary of model variables.
Table A6 .
Random effects regression results. Notes: Random effects Generalized Least Squares (GLS) regression with clustered standard errors. Four models formed by hierarchical regression (adding internal control variables one by one). ROA as the dependent variable. Standard errors in parentheses. RE coefficients match the pooled OLS coefficients because rho = 0 (variability is mainly within firms, not between firms). Significance levels: * p < 0.05, ** p < 0.01, *** p < 0.001.
"Economics",
"Business"
] |
Tissue culture and next-generation sequencing: A combined approach for detecting yam (Dioscorea spp.) viruses
In vitro culture offers many advantages for yam germplasm conservation, propagation and international distribution. However, low virus titres in the generated tissues pose a challenge for reliable virus detection, which makes it difficult to ensure that planting material is virus-free. In this study, we evaluated next-generation sequencing (NGS) for virus detection following yam propagation using a robust tissue culture methodology. We detected and assembled the genomes of novel isolates of already characterised viral species of the genera Badnavirus and Potyvirus, confirming the utility of NGS in diagnosing yam viruses and contributing towards the safe distribution of germplasm.
Introduction
Yam (Dioscorea spp. of family Dioscoreaceae) is a multi-species crop that generally produces large, starchy tubers used as a popular food staple in Africa and Asia. In West and Central Africa, yams play a principal role in food and nutrition security and income generation for more than 60 million people and are important in cultural life [1][2][3][4]. The major cultivated yam species globally are D. alata, D. bulbifera, D. cayenensis, D. esculenta, D. opposita-japonica, D. nummularia, D. pentaphylla, D. rotundata, and D. trifida [5]. The species D. cayenensis and D. rotundata are indigenous to West Africa, where they are the two most important yam species in terms of yield produced. In contrast, D. alata is of Asiatic origin and is the most globally widespread species of yam [1]. Yam is mainly cultivated by smallholder farmers, and the 'yam belt' stretching across Benin, Ivory Coast, Ghana, Nigeria, and Togo in West Africa is the world's dominant zone for yam production. According to reports of the International Institute of Tropical Agriculture (IITA), the demand for this food security crop is always higher than the actual supply and, with an increasing population, that trend is expected to continue [1].
Yams are annual or perennial vines and climbers with underground tubers [6]. Cultivated yams are generally propagated vegetatively using their tubers, which leads to the perpetuation and accumulation of tuber-borne pathogens, particularly viruses [7]. Virus species belonging to at least six different genera infect yams in West Africa [7][8][9], causing severe impacts on tuber yield and quality as well as impeding yam germplasm movement. Yam mosaic virus (YMV; genus Potyvirus), Yam mild mosaic virus (YMMV; genus Potyvirus), Cucumber mosaic virus (CMV; genus Cucumovirus), and several species of Dioscorea-infecting badnaviruses have been reported to be widespread across the 'yam belt' in West Africa [10][11][12][13][14]; YMV is often described as the most economically important of these. The first and only complete YMV genome (an Ivory Coast isolate) was reported by Aleman et al. [15] in 1996. YMV was first identified in D. cayenensis by Thouvenel and Fauquet in 1979 [16] and has a single-stranded, positive-sense RNA genome of 9608 nucleotides in length that is encapsidated in flexuous filamentous particles. YMV is transmitted horizontally by aphids in a non-persistent manner as well as by mechanical inoculation. It is also transmitted vertically by vegetative propagation of infected plant material [15,17]. YMV infection is associated with a range of symptoms, including mosaic, mottling, green vein banding, leaf deformation, and stunted growth, leading to reduced tuber yield.
Badnaviruses are plant pararetroviruses (family Caulimoviridae, genus Badnavirus) that have emerged as serious pathogens infecting a wide range of tropical and subtropical crops; these include banana, black pepper, cacao, citrus, sugarcane, taro, and yam [18]. Badnaviruses have bacilliform-shaped virions that are uniformly 30 nm in width, have a modal particle length of 130 nm, and contain a single molecule of non-covalently closed circular double-stranded DNA in the range of 7.2-9.2 kbp with each strand of the genome having a single discontinuity [19]. Badnavirus replication involves the transcription of a single, greater-than-genome length, terminally redundant pregenomic RNA, which serves as a polycistronic mRNA for translation of the genome's three open reading frames (ORFs) and is used as the template for DNA synthesis in the cytoplasm [19]. Badnaviruses transport their DNA into the host nucleus for transcription, and random integration of the viral DNA into the host genome may occur through illegitimate recombination or during the repair of DNA breaks [20,21]. The genus Badnavirus is the most diverse within the family Caulimoviridae, and the genetic and serological diversity of its members, along with the occurrence of integrated viral counterparts termed endogenous pararetroviruses (EPRV) in the genomes of its hosts, complicate the development of reliable diagnostic tools based on DNA detection [22][23][24][25].
The only effective method of controlling the above viral diseases is to use virus-free ('clean') planting material. The scarcity and associated high expense of such material has been identified as one of the most important factors limiting yam production in West Africa [3]. Yam production has historically been hindered by the low rate of multiplication achieved by conventional yam propagation methods (e.g. seed tubers), which are slow and inadequate for rapid multiplication [38]. Plant tissue culture techniques have the potential to overcome some limitations of conventional propagation methods in yams. Studies by Aighewi et al. and IITA showed that aeroponics and temporary immersion bioreactor systems (TIBs) produce improved multiplication rates and higher-quality planting material compared with techniques using ware and seed tubers (including the minisett technique) or vine cuttings [39,40]. These in vitro culture techniques can potentially deliver high-quality, clean, clonal plant material and may therefore represent a sustainable solution for the rapid production of pathogen-free planting material [39,41].
Yam tissue culture is currently used in the exchange of genetic material between countries and in scientific research, such as the rapid increase of planting material for phenotyping against various biotic and abiotic stresses, the efficient transformation of yam lines, the production of virus-free yam lines, and other applications. Techniques and applications for the in vitro propagation of members of the genus Dioscorea have been widely researched [38,[41][42][43][44][45][46][47], and this work has revealed that in vitro propagation and virus indexing for the two most important yam species, D. alata and D. rotundata, still need improvement.
Several serological and nucleic acid-based methods, such as enzyme-linked immunosorbent assay (ELISA), immunocapture reverse transcription-PCR (IC-RT-PCR), RT-PCR, reverse-transcription recombinase polymerase amplification (RT-RPA), closed-tube reverse transcription loop-mediated isothermal (CT-RT-LAMP), and rolling circle amplification (RCA), have been used in indexing known yam viruses and also to characterise new yam potyviruses and badnaviruses [48][49][50][51][52][53]. Next-generation sequencing (NGS) methods are increasingly being employed in the discovery and sequencing of new plant viral genomes [54,55]. Whereas established plant pathogen diagnostic strategies such as ELISA and PCR target specific species, the massively parallel approaches of NGS generate high-throughput data that can be directly analysed for both known and unknown pathogens without the need for prior knowledge of the target sequences [54]. Consequently, NGS has potential as a robust and sensitive detection method for confirmation of virus-free material. However, in their review, Blawid et al. [54] point out that it is necessary to establish sensitive and robust assembly pipelines targeting small viral genomes and ones characterised by low identities to known viral sequences.
Yam is still an understudied 'orphan' crop that demands much more research attention. NGS and bioinformatics tools promise to help fill the knowledge gap around yam genomics and yam viral pathogens. Tamiru et al. [56] recently reported the whole genome sequencing of D. rotundata; this will serve as a springboard towards gene discovery and ultimately genetic improvement of this neglected staple crop. In this study, we describe a method for identifying infected planting material using the combination of robust in vitro propagation of D. alata and D. rotundata and NGS-based virus detection in yam tissue culture using Illumina HiSeq4000 RNA sequencing.
Plant material
Yam breeding lines and landraces of D. alata (n = 2) and D. rotundata (n = 6) used in this study were provided by the IITA (Ibadan, Nigeria). Tubers were known to be infected by YMV and badnaviruses as tested by conventional RT-PCR and PCR at IITA using generic primers respectively, but the precise status of species and occurrence of any other virus was not known. Tubers were grown in a quarantine aphid-proof glasshouse at the Natural Resources Institute (NRI, Chatham, UK), as described by Mumford and Seal [49]. Actively growing plants of the D. rotundata breeding lines (TDr 00/00515, TDr 00/00168, and TDr 89/02665) and landraces (Nwopoko and Pepa), and the D. alata breeding lines (TDa 95/310 and TDa 99/00240) ( Fig. 1), were used as a source of explant material for in vitro propagation experiments. D. rotundata landrace (cv. Makakusa) from Nigeria showing viral symptoms was chosen for the experiments involving NGS-based virus discovery.
Yam in vitro culture
Vine cuttings from a single plant of each genotype, usually containing one to three nodes, were trimmed to 5-8 cm and leaves removed. Each cutting was placed in a 1-l bottle half-filled with tap water. The cuttings were washed twice with tap water through vigorous shaking by hand. The explant materials were then immersed in 70% v/v ethanol for 3-5 s and immediately transferred to 250 ml of a sterilisation solution consisting of 5% w/v sodium hypochlorite (NaClO) with 1-2 drops of Tween-20. Bottles containing explant materials and the sterilisation solution were incubated with a SF1 flask shaker (Stuart Scientific, UK) for 20 min at 500 oscillations/min. The sterilisation solution was decanted in a laminar flow cabinet under sterile conditions, and the cuttings were rinsed three times with sterilised deionised double-distilled water. Two different in vitro culture media compositions (M1 and M2) were tested for their suitability for the in vitro propagation of selected yam accessions ( Table 1). The effects on plant growth of both media with and without activated charcoal (AC) were tested.
Both media compositions were adjusted to pH 5.8 using 0.1 M NaOH solution and then supplemented with 2 ml/l of plant preservative mixture (Plant Cell Technology, USA) and 2 g/l Phytagel™ (Sigma-Aldrich, UK). Half of the culture tubes for each medium were supplemented with 0.2% w/v AC. Eight millilitres of medium was dispensed into each culture tube (specimen tubes, soda glass, poly stopper, 100 × 25 mm, G050/30, Fisherbrand, USA) and autoclaved. All chemicals were obtained from Sigma-Aldrich UK, unless otherwise indicated.
Under sterile conditions, surface-sterilised explant materials were sized to 1.0-1.5 cm in length, each containing a single node with axillary buds, and placed in culture tubes containing one of the two culture media. Culture tubes were placed in a plant growth incubation room where the temperature was maintained at 25 ± 1°C and light was provided by cool white fluorescent lamps at 30-50 μmol/(m²·s) for a 16-h photoperiod. The fresh weight of the plantlets was recorded after ten weeks by removing the plantlets from the tubes. The data collected on the fresh weight of 145 individual tissue culture tubes (Table S1) were analysed for statistical significance using analysis of variance (ANOVA). Post hoc Tukey HSD tests were performed for multiple comparisons. The statistical analysis was performed using the R statistical software package [57].
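The authors performed the ANOVA and Tukey HSD tests in R; purely as an analogous, hedged sketch of the same analysis, the Python version below uses statsmodels on a hypothetical layout of Table S1 (one row per culture tube with its fresh weight and medium/charcoal treatment; file and column names are illustrative).

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical layout of Table S1.
tc = pd.read_csv("fresh_weight_tubes.csv")  # columns: fresh_weight, medium (M1/M2), charcoal (AC/no_AC)
tc["treatment"] = tc["medium"] + "_" + tc["charcoal"]

# One-way ANOVA on treatment, followed by Tukey HSD for pairwise comparisons.
model = ols("fresh_weight ~ C(treatment)", data=tc).fit()
print(sm.stats.anova_lm(model, typ=2))
print(pairwise_tukeyhsd(tc["fresh_weight"], tc["treatment"], alpha=0.05))
```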
Following the establishment of a robust in vitro propagation protocol for D. alata and D. rotundata germplasm, all yam material grown at NRI was conserved in M2 media and culture tubes placed in an A1000 tissue culture chamber (Conviron, UK) maintained at 28°C and 50% humidity and with light provided by 21 W T5/840 cool white fluorescent lamps with 30-50 μmol/(m 2 ·s) for a 16-h photoperiod.
RNA extraction for NGS
Tissue-cultured plants (pool of three tissue culture tubes) of D. rotundata (cv. Makakusa) grown in vitro for six weeks were used for RNA extraction. Total RNA was extracted from leaf tissues using a modified cetyltrimethyl ammonium bromide (CTAB) method combined with the RNeasy Plant Mini Kit (Qiagen GmbH, Germany). Briefly, 100 mg of leaf tissue snap-frozen in liquid nitrogen was ground in gauge bags (10 cm × 15 cm) (Polybags Ltd, UK) until it became a smooth paste. Pre-warmed (1 ml) CTAB extraction buffer (2% w/v CTAB, 100 mM Tris-HCl, pH 8.0, 20 mM EDTA, 1.4 M NaCl, and 1% v/v β-mercaptoethanol) was added immediately and the tissue was further ground. Plant extract (600 μl) was transferred into a sterile microcentrifuge tube. The tube was briefly vortexed and then incubated at 60°C for 10 min, mixing the samples by inversion every 2 min. Samples were then allowed to cool to room temperature and an equal volume of phenol:chloroform:isoamyl alcohol (25:24:1) was added. Samples were mixed vigorously by inverting approximately 50 times, followed by centrifugation at 15,800 g for 10 min. The supernatant (400 μl) was transferred into a new sterile microcentrifuge tube to which an equal amount of 100% molecular grade ethanol was added. Samples were mixed, and the mixtures were immediately transferred to RNeasy mini spin columns supplied in 2-ml collection tubes provided with the RNeasy Plant Mini kit. From this step until the elution of the RNA, the RNeasy Plant Mini Kit manufacturer protocol was followed.
Virus genome characterisation
The assembled transcripts were used for similarity searches in the NCBI GenBank databases (http://www.ncbi.nlm.nih.gov/genbank/) using BLAST [61]. Full-length genome sequences were further analysed in Geneious v10.2.3 and putative ORFs were identified using the NCBI ORF finder (https://www.ncbi.nlm.nih.gov/orffinder/). Conserved domains of the putative gene products were searched using the NCBI conserved domain tool (http://www.ncbi.nlm.nih.gov/Structure/cdd/wrpsb.cgi). Genome maps were generated using SnapGene® Viewer version 4.1 (from GSL Biotech; available at snapgene.com). Multiple alignments of partial 528-bp reverse transcriptase (RT)-ribonuclease H (RNaseH) badnavirus sequences (the RT-RNaseH region used for taxonomic assessment of badnaviruses [19]), and alignments of the 1184-bp-long YMV nuclear inclusion B-coat protein 3′-untranslated region (NIb-CP-3′-UTR) according to Bousalem et al. [62], were performed using the CLUSTALW default settings in MEGA7 [63]. Complete badnavirus genomes were aligned using Multiple Alignment using Fast Fourier Transform (MAFFT; http://www.ebi.ac.uk/Tools/msa/mafft/) [64]. Phylogenetic analysis was performed in MEGA7 using maximum-likelihood methods based on the Hasegawa-Kishino-Yano model [65]. The robustness of each tree was determined by generating a bootstrap consensus tree using 1000 replicates. Virus sequences obtained from GenBank were used for comparative analyses, and accession numbers are shown in the phylogenetic trees. Recombination analysis was performed using the RDP4 software package with default settings [66], as recently described by Bömer et al. [31] in a study on full-length DBV genomes.
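For readers who want to script the similarity-search and alignment steps described above, a minimal sketch is shown below. It assumes local installations of the NCBI BLAST+ and MAFFT command-line tools; the file and database names are placeholders, and this is not the authors' exact pipeline (which used the NCBI web services, Geneious, MEGA7, and RDP4).

```python
# Sketch: BLAST similarity search of assembled transcripts, then a MAFFT alignment.
import subprocess

# Similarity search of assembled transcripts against a custom viral database
subprocess.run(
    ["blastn", "-query", "transcripts.fasta", "-db", "yam_virus_db",
     "-outfmt", "6", "-out", "blast_hits.tsv"],
    check=True,
)

# Multiple alignment of badnavirus RT-RNaseH (or full-genome) sequences with MAFFT
with open("aligned.fasta", "w") as out:
    subprocess.run(["mafft", "--auto", "rt_rnaseh_seqs.fasta"], stdout=out, check=True)
```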
Establishment of a robust in vitro propagation methodology for yam germplasm
The effects of the two culture media compositions M1 and M2 and of AC on the fresh weight of yam after 70 days of growth in tissue culture were analysed to establish their impact on in vitro propagation of seven accessions of the species D. alata and D. rotundata. After 70 days in culture, fresh weights of the yam plantlets were recorded and analysed for statistical significance. Both media compositions induced growth of complete plantlets (with shoots and roots) in all yam material tested. The dataset comprised 145 plantlets (Table S1) and was subsequently analysed using three-way ANOVA and post hoc Tukey HSD tests. Analysis revealed a significant effect of the in vitro culture media on plant fresh weight (P = 0.000198, df = 1, F = 14.765) ( Fig. 2A). Accessions grown on tissue culture medium M2 had a higher mean fresh weight (1.52 g) than those grown on M1 (1.12 g).
The AC has been reported to improve the growth of some plants in culture, possibly through a combination of its effects on light penetration and its ability to adsorb polyphenolics and other compounds that would otherwise accumulate in the culture medium [67,68]. Here, the effect of media supplemented with 0.2% w/v and without AC on fresh weight development was evaluated. The three-way ANOVA showed a significant effect on fresh weight with the addition of AC to the media (P = 0.00104, df = 1, F = 11.311) and average fresh weights were increased by 0.2 g (from 1.21 to 1.41 g) ( Fig. 2A).
Moreover, the analysis showed that different accessions had significantly different fresh weights (P < 0.001, df = 6, F = 61.748). The D. alata breeding line TDa 95/310 had the highest mean weight (2.4 g), and D. rotundata landrace Nwopoko had the lowest (0.6 g) (Fig. 2B). A significant interaction between accession and media was also observed (P = 0.0014, df = 6, F = 3.880), showing that line TDr 89/02665 performed better on M1 (1.12 g) than M2 (0.99 g), whereas all other tested lines developed higher mean fresh weights when incubated on M2 (Fig. S1). The biggest difference in fresh weight between M1 and M2 was observed in TDa 99/00240. While fresh weights of tissue cultures differed as a function of media and accession, the significant interaction between media and accession suggests that in vitro propagation methods specific to an accession could be developed. The D. alata accessions TDa 99/00240 and TDa 95/310 developed more fresh weight than D. rotundata material. In summary, tissue culture media M2 induced higher mean fresh weights than M1 and hence can be described as a robust yam tissue culture media composition for the in vitro multiplication of D. alata and D. rotundata.
NGS reveals virus infections in yam tissue culture plantlets
Following the establishment of a standardised and robust in vitro propagation methodology for D. alata and D. rotundata genotypes, we decided to test NGS-based virus detection in a selected yam landrace as a case study for a combined approach of virus diagnostics by NGS in yam tissue culture. For this, leaves of three D. rotundata (cv. Makakusa) plantlets were pooled (Fig. 3A) and high-quality total RNA was extracted (Fig. 3B) for Illumina RNA sequencing. Over 38 million reads were generated for the Makakusa yam sample and assembled using the Trinity pipeline. The RNA-seq assembled transcripts were mapped to a custom-made BLAST database containing complete YMV and badnavirus genomes publicly available from the NCBI GenBank. This approach resulted in three transcripts, of which two mapped to the DBRTV3 genome ([31]; GenBank MF476845) and one mapped to the YMV genome ([15]; GenBank U42596), indicating the presence of a mixed infection with a DBRTV3-like badnavirus and a YMV Nigeria isolate (YMV-NG) in cv. Makakusa. We propose the names "Dioscorea bacilliform RT virus, isolate DBRTV3-[2RT]" and "Dioscorea bacilliform RT virus, isolate DBRTV3-[3RT]" for the two DBRTV3-like badnavirus transcripts. We reconstructed the 5′-ends of the DBRTV3-[2RT] and DBRTV3-[3RT] genomes by extending the mapped contigs with the raw RNA-seq reads using the Geneious [60] iterative assembler with ten iterations. Two single contigs of 7453 and 7448 bp were recovered, representing the complete DBRTV3-[2RT] and DBRTV3-[3RT] badnavirus genomes, respectively. The raw RNA-seq reads were also remapped to the Trinity-assembled transcripts to obtain an approximate number of reads (below 1% of total reads for all three viral genomes) representing the identified virus genomes, interestingly showing a strong bias in the sequencing towards the 3′-end of transcripts (Fig. 3C-E). This non-uniformity of read coverage is likely to have been caused by the use of oligo-dT beads to capture polyA tails in the library preparation technology [69,70].
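The re-mapping step used to estimate how many raw reads support each viral contig could be approximated as follows. This is a rough sketch, assuming the reads have already been aligned back to the Trinity assembly and stored as a sorted, indexed BAM file; the BAM file name is an assumption, while the contig names are those reported above.

```python
# Count reads mapping to each viral contig and express them as a fraction of mapped reads.
import pysam

viral_contigs = [
    "TRINITY_DN10230_c4_g4_i1",   # YMV-like transcript
    "TRINITY_DN11412_c7_g2_i9",   # DBRTV3-like transcript
    "TRINITY_DN11412_c7_g2_i2",   # DBRTV3-like transcript
]

with pysam.AlignmentFile("remapped.bam", "rb") as bam:
    total = bam.mapped  # total mapped reads according to the BAM index
    for contig in viral_contigs:
        n = bam.count(contig=contig)
        print(f"{contig}: {n} reads ({100.0 * n / total:.2f}% of mapped reads)")
```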
Characterisation of members of the genera Badnavirus and Potyvirus identified in a yam landrace from Nigeria
The assembly of three full-length viral genomes derived from cv. Makakusa was achieved using Illumina HiSeq4000 RNA sequencing based on total RNA extracted from tissue culture leaves showing mild viral symptoms (Fig. 3A). New members of the genera Badnavirus and Potyvirus were characterised from these assemblies. [Fig. 3 caption (panels C-E): Raw RNA-seq reads were mapped to Trinity-assembled transcripts using Geneious software [60]. Contig TRINITY_DN10230_c4_g4_i1 (C) showed high sequence similarity (> 83%) to YMV (GenBank U42596) in BLAST searches and > 337,000 reads (0.88% of total reads) mapped to this contig. Contigs TRINITY_DN11412_c7_g2_i9 (D) and TRINITY_DN11412_c7_g2_i2 (E) showed high sequence similarity (88-89%) to DBRTV3 (GenBank MF476845) and > 338,000 reads (0.88% and 0.89% of total reads, respectively) mapped to each of these contigs.] Typical badnavirus features, including the RT-RNaseH domain used for taxonomic assessment of badnaviruses [19], were identified in the DBRTV3-[2RT] genome, and the CP and movement protein (MP) described by Xu et al. [72] were also located. A circular representation of the DBRTV3-[2RT] genome is shown in Fig. 4, highlighting all features typical of genomes in the genus Badnavirus of family Caulimoviridae.
Molecular phylogenetic analysis was undertaken based on 528-bp partial nucleotide sequences of the badnavirus RT-RNaseH domains of DBRTV3-[2RT], DBRTV3, DBALV, DBALV2, DBESV, DBRTV1, DBRTV2, DBTRV, DBSNV, and 19 additional yam badnavirus sequences available in the GenBank database with nucleotide identity values > 80% relative to DBRTV3-[2RT] in similarity searches with NCBI BLAST. DBRTV3-[2RT] is 93% identical to the sequence of an endogenous DBV described by Umber et al. ( [14], eDBV5 clone S1un5Dr, GenBank KF830000) and was found to belong to the monophyletic species group K5 described by Kenyon et al. [24] (Fig. 5A). A second phylogenetic analysis was undertaken using the publicly available full-length genomes of eight DBVs and of badnavirus type members from five host plants other than yam (Fig. 5B). Yam badnaviruses form a well-supported clade in which DBRTV3, DBRTV3-[2RT], and DBRTV3-[3RT] group closely together and represent sister taxa of DBSNV in the genus Badnavirus, which we previously reported for DBRTV3 [31].
We recently identified a unique recombination event in DBRTV3 using recombination analysis with full-length DBV genome sequences, with DBSNV likely to be the major parent and DBALV the minor parent, providing the first evidence for recombination in yam badnaviruses [31]. Here, we repeated the same recombination analysis, replacing the DBRTV3 genome with that of DBRTV3-[2RT]. This analysis detected a total of 11 possible recombination events (Table S2). Interestingly, a very similar event (based on the location of the breakpoints) to that identified for DBRTV3 in our previous study [31] was detected here at a very high degree of confidence for DBALV instead, with all seven recombination detection methods (RDP, GENECONV, BootScan, MaxChi, Chimaera, SiScan, and 3Seq) available in RDP4 showing significant P values (Table S2) [66]. The putative recombination site was in the IGR of DBALV and extended into the 5′-end of ORF1. DBALV was identified as the likely recombinant, with DBRTV3-[2RT] being the virus most closely related to the minor parent (Table S2); however, the RDP4 software highlighted the possibility that DBRTV3-[2RT] is the actual recombinant and DBALV the minor parent. DBSNV was used to infer the unknown major parent. Therefore, the identified unique recombination event is in line with the previous recombination event reported for DBRTV3 [31], adding further to the field's understanding of the extent of recombination among DBV genomes, a subject that demands further research attention in the future.
Potyvirus characterisation
The complete nucleotide sequence of the YMV-NG single-stranded, positive-sense RNA genome was determined to be 9594 bp in length, with a GC content of 41.4%. A BLAST search confirmed that YMV-NG was most similar (85% sequence identity) to the complete genome of a YMV Ivory Coast isolate ([15]; GenBank U42596), a member of the genus Potyvirus collected and characterised in 1977 from naturally infected yams in the Ivory Coast [15,16]. Sequence analysis of YMV-NG using the NCBI ORF finder revealed a single large ORF that putatively encodes a single polyprotein. This putative polyprotein is typically cleaved into functional proteins at semi-conserved sites by three self-encoded proteases, as is the case for most genomes of the family Potyviridae [74]. By comparing the YMV-NG sequence with the annotated sequence of YMV isolate Ivory Coast [15], which possesses the genome organisation of a typical member of the genus Potyvirus [74], and by using the NCBI conserved motif search, we identified sequences predicted to encode protein 1 protease (P1-Pro), helper component protease (HC-Pro), protein 3 (P3), 6-kDa peptide (6K), cytoplasmic inclusion (CI), nuclear inclusion A protease (NIa-Pro), nuclear inclusion B RNA-dependent RNA polymerase (NIb), and the CP. A second small ORF was identified as the pretty interesting Potyviridae ORF (PIPO), which is usually generated by a polymerase slippage mechanism and expressed as the trans-frame protein P3N-PIPO [74-77]. [Figure caption fragment: the primers described in [73] amplify a 579-bp fragment of the RT-RNaseH domain and are used for taxonomic assessment of badnaviruses [19].] A linear representation of the YMV-NG genome is shown in Fig. 6.
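As an illustration of the ORF identification step, the short sketch below scans the three forward reading frames of the assembled genome for the longest Met-to-stop open reading frame, roughly mimicking what the NCBI ORF finder reports. The FASTA file name is an assumption, and a full annotation would of course also consider the reverse strand and conserved-domain searches.

```python
# Report the longest ORF in each forward reading frame of the assembled genome.
from Bio import SeqIO

record = SeqIO.read("ymv_ng.fasta", "fasta")  # assembled YMV-NG genome (assumed file name)

for frame in range(3):
    # Trim to a multiple of three so translation covers complete codons only
    sub = record.seq[frame: frame + 3 * ((len(record.seq) - frame) // 3)]
    protein = str(sub.translate())
    # Longest Met-to-stop stretch in this frame
    longest = 0
    for fragment in protein.split("*"):
        if "M" in fragment:
            longest = max(longest, len(fragment) - fragment.index("M"))
    print(f"Frame +{frame + 1}: longest ORF ~ {longest} aa (~{3 * longest} nt)")
```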
Molecular phylogenetic analysis was undertaken based on the NIb-CP-3′-UTR regions of YMV-NG and of 26 YMV sequences, and their group assignments were compared with those described by Bousalem et al. [62]. Based on the NIb-CP-3′-UTR region, YMV-NG is most similar to a YMV partial RNA for coat protein, isolate 608, collected in Nigeria ([62], GenBank AJ244047) and is likely to belong to group VII identified in the analysis of Bousalem et al. (Fig. 7) [62]. Interestingly, Bousalem et al. [62] reported phylogenetically incongruent topological positions for YMV isolate 608, as well as for YMV isolates TRIFIDA/C5 and CAM2, and suggested that recombination events may have occurred during the evolution of YMV. We performed recombination analysis based on the NIb-CP-3′-UTR regions of all YMV sequences used in the phylogenetic analysis shown in Fig. 7, confirming a recombination event described by Bousalem et al. [62]. TRIFIDA/C5 is the likely recombinant, and isolates CGU1/C18 (group VI) and G13/C1 (group V) are likely to represent the major and minor parents, respectively. No recombination events were detected for YMV-NG (data not shown). Further phylogenetic studies and recombination analyses based on complete genome sequences of YMV isolates identified in the future might shed more light on the genetic diversity and evolution of the Yam mosaic virus species within the genus Potyvirus, family Potyviridae. [Fig. 5 caption: Alignments of partial RT-RNaseH sequences were performed in MEGA7 [63] using the CLUSTALW tool, and full genome alignments were done using MAFFT [64]. Evolutionary relationships were inferred using the maximum-likelihood method based on the Hasegawa-Kishino-Yano model [65], conducted in MEGA7. Bootstrap analysis was performed with 1000 replicates and the cut-off value was 80%. The trees are drawn to scale, with branch lengths measured in the number of substitutions per site.]
Confirmation of virus presence using RT-PCR
One-step RT-PCR assays were performed to confirm the mixed infection of DBRTV3-[2RT]/DBRTV3-[3RT] and YMV-NG detected by RNA-seq in cv. Makakusa grown in tissue culture (Fig. 8). One-step RT-PCR conditions for the detection of YMV were previously described by Silva et al. [51] using primers designed by Mumford and Seal [49] that target the CP and the 3′-UTR. Specific primers for DBRTV3-[2RT] were designed in this study, targeting the RT-RNaseH region used for taxonomic assessment of badnaviruses [19], and were tested using the same one-step RT-PCR conditions chosen for the YMV assay. We tested the detection limits of both one-step RT-PCR assays by making 10-fold serial dilutions of the same total RNA sample from Makakusa that was analysed by RNA-seq, starting with a total RNA concentration of 175 ng/μl. Amplification products of the expected sizes were generated in both assays, and the DBRTV3-[2RT]/DBRTV3-[3RT] and YMV-NG infections were confirmed by Sanger sequencing, showing sequences identical to those derived from the RNA-seq analysis (data not shown). Only weak amplification products were still detectable at 1.75 ng/μl of RNA (10-2 dilution). [Fig. 7 caption: Molecular phylogenetic analysis of the NIb-CP-3′-UTR region of YMV-NG (GenBank accession number MG711313) in comparison to 26 YMV sequences and their group assignments from a phylogenetic analysis by Bousalem et al. [62]. Yam mild mosaic virus (YMMV) was used as an outgroup. The sequences were aligned using the CLUSTALW tool, and the evolutionary relationships were inferred using the maximum-likelihood method based on the Hasegawa-Kishino-Yano model [65], conducted in MEGA7 [63]. Bootstrap analysis was performed with 1000 replicates and the cut-off value was 50%. The tree is drawn to scale, with branch lengths reflecting the number of substitutions per site.]
Robust yam in vitro culture with potential for germplasm conservation and propagation
The use of virus-free, clonally propagated planting materials is the most effective method to control the spread of viruses infecting yam [3]. Molecular diagnostic tools such as RT-RPA [51] and CP-RT-LAMP [53] have been developed for routine detection of one such virus, YMV, which is endemic in the West African 'yam belt' [1]. These and similar tools need to be adopted and used to verify the infection status of planting material in West Africa, where efforts to boost production of virus-free seed yam and establish sustainable seed systems are ongoing [3,53]. Research into modern yam seed production methods, including vine cutting, tissue culture, aeroponics, and TIBs, highlights the importance of an integrated multiplication scheme that combines two or more methods of seed yam production [39]. Aighewi et al. [39] further concluded that these methods need to be adopted in building and sustaining a viable seed yam production system and particularly recommends that tissue culture be included in any major seed yam production scheme due to its importance in the production and maintenance of a nucleus of clean material.
In this study, we presented a standardised in vitro propagation methodology for the two most important yam species, D. alata and D. rotundata. We compared two nutrient media compositions with or without the addition of AC. Different plant growth regulators present in a plant growth medium and their concentrations have a major influence on the success of in vitro propagation. Among plant growth regulators, auxin and cytokinins are the major determinants of root and shoot initiation in plantlets grown in vitro. Organogenesis (type and extent) in plant cell cultures is determined by the proportion of auxins to cytokinins [78]. Cytokinins, such as kinetin and BAP, have been proven to promote cell division, shoot proliferation, and shoot morphogenesis and to repress root formation; whereas auxins, such as NAA and dicamba, are usually used to stimulate callus production and cell growth, to initiate shoots and rooting, to induce somatic embryogenesis, and to stimulate growth from shoot apices and shoot stem culture [79]. In this study, complete plantlets (with roots and shoots) were obtained from M1 (containing kinetin) and M2 (containing NAA + BAP) media compositions. This suggests that kinetin and the combination of BAP + NAA are both capable of inducing root and shoot organogenesis from yam nodal explant material, which is in line with results observed by Poornima and Ravishankar [46] in D. oppositifolia and D. pentaphylla.
Blackening and browning of in vitro culture media, which is caused mainly by polyphenolic compounds, is a serious problem for the regeneration of cultured plants. This phenomenon has been observed in many woody plants [80], and yams are known to contain phenolic compounds. The AC is characterised by having a very fine network of pores with a large surface area, which generates a high adsorptive capacity, and is typically incorporated in tissue culture media to prevent browning and blackening [67]. Because of its high adsorptive capacity, AC removes inhibitory substances, such as phenolic exudates coming from cuts of the explant materials, from the culture medium [68]. It also provides a dark environment, which can provide a better environment for root development in the culture by promoting the accumulation of photosensitive auxins or co-factors at the base of the shoot [67]. We observed a significant positive effect of AC on fresh weight development in cultured yam plantlets, which supports the findings of Poornima and Ravishankar [46].
Value of NGS technology for identifying viruses in yam tissue culture: a case study
The ideal propagation technique for yam multiplication needs to be efficient and allow robust virus indexing. At the IITA, tissue culture is used to conserve the yam genetic resources stored at the IITA genebank (currently 5918 accessions), and selected yam accessions are cleaned of viral diseases through meristem culture [39]. Following regeneration, tissue culture plants are tested for viral infections, and negatively indexed plants are transplanted into screenhouses for establishment. Such plants are re-indexed for viruses to ensure that plants are free from virus infection. Virus-free plants are used as sources for multiplication in vitro or under screenhouse conditions for tuber production for international distribution [81].
However, robust virus indexing of yam in vitro material is challenging for two main reasons: (1) in vitro culture is renowned for its ability to reduce virus titres, potentially bringing certain viral infections below the detection limit of even highly sensitive diagnostic tools; and (2) standard diagnostic tests usually target only a subset of known viral species. False-negative results from routine virus indexing can potentially have dramatic consequences if, for example, infected yam germplasm is internationally distributed. Therefore, we tested whether Illumina HiSeq4000 RNA sequencing has the potential for use in robust, comprehensive, unbiased, and sensitive NGS-based virus detection in yam tissue culture material when applied without prior knowledge of the viral sequences. Here, we report an optimised protocol which includes the extraction of high-quality total RNA suitable for RNA sequencing from yam tissue culture leaves, and we show that this combined tissue culture and NGS approach allows the characterisation of novel yam mosaic and badnaviruses following a relatively simple bioinformatic pipeline. This case study is a promising step in the development of NGS-based yam virus diagnostics, and we are hopeful that this technology will be adopted in certain situations where the cost is justified to support virus-free yam propagation, distribution, and germplasm conservation.
Mixed infections of YMV and yam badnaviruses
Numerous full-genome sequences of known and unknown plant viruses have been discovered using NGS-based methods and subsequently validated by molecular diagnostic protocols [82]. The detection of new members of the genera Badnavirus and Potyvirus in a selected yam landrace functioned as a first case study for NGS virus diagnostics in yam. The NGS approach revealed a mixed infection with the presence of two badnavirus transcripts (DBRTV3-[2RT] and DBRTV3-[3RT]) and a novel yam mosaic virus, YMV-NG. The RNA sequencing results support previous findings obtained using a combination of RCA and PCR for the detection of DBRTV3 [31] and RPA-based diagnostic tools [51] and confirm the usefulness of NGS in plant virology. The mixed infection was further confirmed using a one-step RT-PCR approach, and the detection limit suggested low titres for both virus infections in Makakusa tissue culture.
Endogenous viral sequences can be transcriptionally active in yam species and may be functionally expressed, as described for geminivirus-like elements [83]. The majority of EPRVs described to date are fragmented, rearranged, and contain inactivating mutations, and are therefore replication-defective and consequently non-infectious. However, it remains unclear whether the eDBV sequences, which have been described for four distinct badnavirus species (groups K5, K8, K9, and U12) [14], are transcriptionally active and potentially infectious. Therefore, it remains remotely possible that DBRTV3-[2RT] and DBRTV3-[3RT] were assembled from eDBV5 transcripts. Future work will be performed to test for the potential existence of eDBV forms of the DBRTV3-[2RT] and DBRTV3-[3RT] sequences in yam germplasm using Southern hybridisation techniques like those described by Seal et al. [25] and Umber et al. [14], as previously discussed for DBRTV3 [31].
Advantages of NGS over standard molecular diagnostic tools for virus detection
Almost half of emerging plant infectious diseases are viral, according to outbreak reports [84]. In the past, the detection and characterisation of novel viruses mostly relied on electron microscopy, serological methods such as ISEM and ELISA, and nucleic acid-based methods such as PCR and microarrays [85][86][87]. Efficient routine virus diagnostic tools have become easily available because of the breakthroughs made around ELISA and PCR-based assays [88,89], and both techniques and their variants have been modified for the broad-based detection of plant viruses. In their review, Prabha et al. [55] conclude that both techniques suffer from several significant drawbacks, particularly when used in diagnosing unknown viral diseases, as all these techniques are dependent on previous knowledge about viral genome sequence information for primer design or efficient monoclonal or polyclonal antibodies targeting virus epitopes. The dependence on sequence information includes novel isothermal detection methods which are now increasingly being developed including RT-RPA and CP-RT-LAMP assays for YMV detection [51,53].
The use of degenerate primers targeting conserved sites in known viral gene sequences has led to the discovery of unknown and foreign viruses. Conserved sites are identified by sequence comparison, which means that the usefulness of degenerate primers depends entirely on how well the known sequences represent the target group, including unknown sequences [90]. According to Zheng et al. [90], sampling bias in the past has misled researchers attempting to identify conserved target sites ('consensus decay') to design degenerate primers targeting the genus Potyvirus, and regular updating of primer design is needed. The degenerate badnavirus-specific primer pair Badna-FP/-RP [73] has led to the discovery of several hundred badnavirus sequences across different plant hosts and hence is a good example of the usefulness and power of this approach. However, in the case of yam badnaviruses, the extreme heterogeneity of DBVs [26], mixed infections [27], and presence of integrated counterparts in the form of complex mixtures of eDBV sequences [33] means that there is still a need for the development of a robust diagnostic test for all episomal DBVs. Current diagnostic practices for DBV screening using the Badna-FP/-RP primer pair are likely to introduce many false positive results due to the presence of eDBV sequences in D. cayenensis-rotundata genomes [14,25,27,33], which cannot be distinguished from DBVs in a simple Badna-PCR. Additionally, false-negative results cannot be excluded because of sequence heterogeneity and the presence of mixed infections and potentially low titres.
Compared with routine serological and nucleic acid-based diagnostic methods, NGS technologies can provide a more comprehensive picture of the entire plant virome in a selected sample where the additional cost of NGS can be justified. The NGS enables the unbiased detection and discovery of novel viruses and their complete genomes without prior knowledge of the viral sequences. These massive parallel sequencing approaches advance our understanding of viral genome variability, evolution within the host, and virus defence mechanism in plants and are therefore extremely useful for plant virology [55,91], although the infectivity of some identified viral sequences cannot be determined from some NGS datasets. The NGS-based virus diagnostic approaches enable the characterisation of complete viral genome sequences, which can then be used for phylogenetic or recombination analysis as shown in this study. The discovery and characterisation of larger numbers of complete viral genome sequences will increase our understanding of viral evolution and the molecular interactions between plant viruses and their hosts.
While the future points to the adoption of NGS approaches in routine plant virus discovery and characterisation, several challenges remain to be addressed: for example, the available classification algorithms depend on homology despite the high diversity of viral sequences and the limited number of reference viral genomes in public databases. Secondly, the analysis tools are not intuitive to use, requiring specialised bioinformatics expertise and expensive computational resources. This has become a major bottleneck in making NGS approaches affordable, despite the massive reduction in the cost of sequencing over the past decade.
Conclusions
We present a case study for sensitive NGS-based virus detection in yam plants grown using a robust tissue culture methodology. In vitro culture media compositions containing different plant growth hormones were compared, and a standardised protocol for yam tissue culture, high-quality total RNA extraction, and NGS analysis was developed. Illumina HiSeq4000 RNA sequencing from leaf material grown in tissue culture was utilised to identify novel members of the genera Badnavirus and Potyvirus, highlighting the utility of NGS-based virus diagnostics in yam. Two badnavirus isolates, DBRTV3-[2RT] and DBRTV3-[3RT], as well as a novel Yam mosaic virus isolate, YMV-NG, were detected in a cv. Makakusa sample from Nigeria, and complete genomes were assembled and characterised for these three viral isolates. The YMV and badnavirus infections were confirmed in RNA extracted from tissue-cultured plant material using one-step RT-PCR. This study presents a promising first step towards developing a robust in vitro propagation and NGS-based virus detection protocol, and confirms the value of NGS in safe movement of germplasm.
Conflicts of interest
None.
"Biology",
"Agricultural And Food Sciences"
] |
Recent experiments on three nucleon systems and problems to be solved
After 2π3NF was found in 1998, many experiments were made on Nd elastic scattering, Nd breakup, and pd capture, and many discrepancies between experiments and calculations were revealed. Systematic experimental data are still being accumulated. From these systematic data, 3NFs other than 2π3NF, such as πρ3NF and ρρ3NF, and the origins of low-energy anomalies are expected to be found in the future.
Introduction
One of the purposes of studying three-nucleon (3N) systems is to find effects of three-nucleon forces (3NF) and to determine their strengths. As is well known, Fujita and Miyazawa predicted the existence of the 2π-exchange three-nucleon force (2π3NF) in 1957 [1]. Faddeev equations for 3N systems have been solved numerically since the late 1960s. In the 1980s, it became widely known that the 3H binding energy cannot be reproduced by 2NF alone but can be reproduced using 2π3NF with an adjustable parameter. To justify the value of the parameter, further evidence was necessary.
From a systematic measurement of the pd elastic scattering cross section in the energy range Ep = 2-18 MeV at the Kyushu University tandem laboratory (KUTL), a systematic discrepancy between experiment and calculation in the cross-section minima around 110° was found in 1994 [2]. The discrepancy, however, received no attention because the Coulomb force was not correctly treated in pd calculations at that time. In 1996, the pd scattering cross section was measured at Ed = 270 MeV (Ep = 135 MeV) to construct a d-beam polarimeter at RIKEN. Koike found by chance the same discrepancy at the cross-section minimum also at 135 MeV, and he introduced it as the Sagara discrepancy at FB15 in 1997 [3]. In 1998, Witała et al. resolved both the binding-energy problem and the Sagara discrepancy by introducing the same 2π3NF [4].
After 2π3NF was discovered, many theoretical studies and experiments on 3NF have been made. The experimental studies have been widely made at higher energy region on Nd elastic scattering, and also made on pd breakup and pd capture.
In pd elastic scattering, many kinds of spin observables, such as the analyzing powers Ay, iT11, Ayy, Axx and Axz, and polarization transfer coefficients, have been measured. The cross section of pd elastic scattering has also been measured at various energies. In pd breakup, the cross section and Ay have been measured. Our experiment on pd breakup at Ep = 247 MeV is presented at this conference [5]. In pd capture, the tensor analyzing powers Ayy, Azz and also Axx have been measured in the last decade at Ed = 100-200 MeV.
All the experimental observables at higher energy disagree more or less with calculations even after 2π3NF is included. The disagreements seem, at least in part, to be caused by 3NFs other than 2π3NF. We report the disagreements in some detail later.
At low energies, there are the long-standing problems of the Ay puzzle and the Space Star anomaly (SS anomaly), which seem to be irrelevant to 3NF. We now have sufficient data for the Ay puzzle, and experimentalists are waiting for theoretical investigations. As for the SS anomaly, which is a discrepancy between experiment and calculation in the Nd breakup cross section around 10 MeV, there were only a few experiments in the 20th century, because there were no reliable calculations on pd breakup and the SS anomaly was studied only in nd breakup. Experiments on pd breakup are far more precise and far easier than nd breakup experiments, and experimentalists had long desired pd breakup calculations. In the meantime, for example, a combination of nd breakup Faddeev calculations and the Watson-Migdal pp FSI formula was tried to approximate pd breakup calculations, and experimental data were fairly well reproduced.
A breakthrough was made by A. Deltuva et al. in 2005 [6]. They succeeded in calculations of all kinds of pd reactions, including pd breakup, using a fast-damping screened Coulomb force. After the success of the pd breakup calculation, we started to measure the pd breakup cross section systematically, to search for origin(s) of the star anomaly. Figure 1 illustrates discrepancies in 3N systems. We have already solved the discrepancies in the 3N binding energy and in the pd scattering cross-section minimum by 2π3NF. There are still many disagreements remaining at higher energy as well as at lower energy. Some of the disagreements may indicate effects of 3NFs other than 2π3NF. [Fig. 1 labels: pd CS discrepancy 1994-1996-1997 (Sagara discrepancy); 3N BE problem.]
Experiments on pd elastic scattering at higher energy
After 2π3NF was discovered, many experiments were performed on pd elastic scattering, and 2π3NF effects were examined. Since so many groups measured pd and nd elastic scattering, we (the Kyushu group) did not measure it; instead, we measured pd breakup and pd capture. Below Ep = 200 MeV, the cross-section minimum of elastic scattering was well reproduced by introducing 2π3NF. Above 140 MeV, it was found that the scattering cross section at backward angles becomes larger than the calculation, and the disagreement increases monotonically with energy. Experimental values are about twice the calculated values at 250 MeV [7]. This systematic disagreement seems to indicate effects of short-range 3NFs other than 2π3NF and/or relativistic effects.
Many kinds of polarization observables of pd scattering have been measured and were found to disagree with calculations. Disagreements in the polarization observables are complicated and are not as large as that of the cross section (see, for example, [8]). Moreover, no systematic feature like the cross-section enhancement has been found in the disagreements of the polarization observables.
It may be better to investigate first the systematic disagreement in the cross section of elastic scattering, and to study other disagreements after the problems in the cross section are completely solved. Cross section is a basic scalar quantity, and modification of cross section influences more or less polarization observables.
Experiments on pd breakup at higher energy
After 2π3NF was discovered, we started a pd breakup experiment at Ep = 247 MeV at RCNP. To see the global features, we first performed a D(p,p1)p2n experiment by detecting only one proton, p1, out of the three outgoing nucleons. To significantly reduce backgrounds from the target, we used an almost pure liquid D2 target instead of an ordinary CD2 target. We had developed the liquid hydrogen target for our pd capture experiment described below.
Experimental results for D(p,p 1 )p 2 n cross section at 247 MeV are shown in Figure 2 with calculations by Witała. Measured cross section is larger than calculation. The disagreement increases at forward angle. Effects of 2π3NF are not enough to explain the experiment. We measured also A y in the same experiment, but we first investigate cross section disagreement.
The disagreement is of similar magnitude to the disagreement in the pd scattering cross section at backward angles at the same energy described above. It may be natural to think that the same origin enhances both the cross section of pd elastic scattering and that of the pd breakup reaction.
In order to examine microscopically the enhancement of the cross section in pd breakup, we recently measured the D(p,p1p2)n cross section at Ep = 247 MeV by detecting two protons in coincidence. We focused on investigating microscopically the enhancement of the D(p,p1)p2n cross section at θ1 = 15° and E1 around 150 MeV. The other proton, p2, was detected at θ2 = 35°, 50°, 65° and 80° on the opposite side of the beam axis, as reported at this conference by Kuroita [5]. In the same experiment, the D(p,p1)p2n cross section was also measured again, and our previous data were completely confirmed.
In Figure 3, the θ2 dependence of the D(p,p1p2)n cross section at E1 = 150 MeV is illustrated with calculations by Kamada [10]. The cross-section enhancement is large at θ2 = 35°, where the cross section is large. On the contrary, the cross-section enhancement is small at backward θ2, where the cross section is small. When θ1 and E1 are fixed, the remaining pair of p2 and n has a fixed total momentum, p2 + pn, and the absolute value of the relative momentum |p2 − pn| is also fixed. At θ1 = 15° and θ2 = 35°, the three outgoing nucleons approximately form a line, i.e., they satisfy the collinear condition.
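That fixing θ1 and E1 fixes both the pair's total momentum and the magnitude of its relative momentum follows from momentum and energy conservation. A compact, non-relativistic sketch (with the deuteron at rest, Bd its binding energy, and equal nucleon masses m) is:

```latex
% Momentum and energy conservation for D(p, p_1)p_2 n:
\vec{p}_2 + \vec{p}_n = \vec{p}_{\mathrm{beam}} - \vec{p}_1, \qquad
T_2 + T_n = T_{\mathrm{beam}} - T_1 - B_d .
% The pair's kinetic energy splits into total and relative parts,
T_2 + T_n = \frac{|\vec{p}_2 + \vec{p}_n|^2}{4m} + \frac{|\vec{p}_2 - \vec{p}_n|^2}{4m},
% so once \theta_1 and E_1 are fixed, |\vec{p}_2 - \vec{p}_n| is also fixed.
```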
The above information may be useful to guess the origin of the cross-section enhancement. Experimental data at more forward angles may also be useful. To measure at forward angles we need new counters, because the present two big counters cannot approach each other closer than 48° (= 15° + 33°).
Experiments on pd capture at higher energy
In the p + d → 3He + γ reaction, the pd scattering state goes to the 3He ground state, and the momentum transfer is large. It is interesting to search for effects of short-range 3NF in this high-momentum-transfer reaction. The cross section of pd capture is, however, very small, below 1 µbarn. Hence we used a liquid hydrogen target and detected 3He recoils simultaneously in a wide angular range from 20° to 160° in the c.m. system.
A polarized d-beam of energy of 196 MeV from RCNP cyclotron was used in our first experiment. The beam polarization axis was in the vertical direction, and recoiled 3 He detection was made in the horizontal plane to measure A y and A yy , and in the vertical plane to measure A xx . Measured A xx and A yy took roughly the same negative values, A xx ≈ A yy , and A yy roughly agreed with calculation, but A xx remarkably disagreed with calculation.
Next we measured A xx and A yy of pd capture at E d = 137 MeV. Preliminary data indicated again the relation of A xx ≈ A yy and remarkable disagreement in A xx . Since pd capture cross section is small and identification of true events was disturbed by overwhelming background events, data analysis took time. Meanwhile, A yy and A zz of pd capture at E d = 180 MeV and 133 MeV were measured at KVI. Their data agreed with calculations. They used a vertically polarized d-beam and a liquid hydrogen target, and detected 3 He and γ ray in coincidence. A yy was measured by detecting γ-rays in the horizontal plane. A zz was measured by detecting γ-rays in two planes inclined by ±45 • from the horizontal plane, and assuming the relation A xx + A yy + A zz = 0. KVI A yy data roughly agree with our data, but A zz data are about 1.5 times smaller in magnitude than ours.
Finally, a confirming measurement was made at RCNP on Axx, Ayy and Azz of pd capture at Ed = 196 MeV. The d-beam was polarized in the vertical direction; 3He recoils were detected in the vertical plane for Axx, in the horizontal plane for Ayy, and in two planes inclined by ±45° from the horizontal plane for Azz, in a way similar to KVI's. The data-analysis method was improved so as to increase the 3He detection efficiency, and both the new data and the previous data were analyzed by the new method.
Our previous data and new data essentially agree to each other, and indicate the relation A xx ≈ A yy and large discrepancy in A xx (also in A zz ). Figure 4 shows various "A zz " data; -A xx -A yy in our previous experiment at 196 MeV, -A xx -A yy in our new experiment at 196 MeV, and A zz (±45 • ) in our new experiment at 196 MeV, together with A zz (±45 • ) in KVI experiment at 180 MeV. Curves are calculations at 200 MeV with and without 2π3NF by Golak. Although our "A zz " data are scattered to some extent, there is a large discrepancy between our "A zz " data and calculations.
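For clarity, the "−Axx−Ayy" entries plotted as "Azz" in Figure 4 follow directly from the traceless relation among the tensor analyzing powers quoted above:

```latex
A_{xx} + A_{yy} + A_{zz} = 0 \quad \Longrightarrow \quad A_{zz} = -\,\bigl(A_{xx} + A_{yy}\bigr).
```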
In our data the Axx ≈ Ayy relation holds, but the calculated Axx and Ayy are significantly different from each other; therefore, a large discrepancy in Axx (and also in Azz) results. The relation Axx ≈ Ayy means symmetry of pd capture with respect to the z-axis (the beam axis). When a d-beam is polarized in the y-direction (vertically), d-induced reactions in the vertical (yz) plane and in the horizontal (xz) plane are expected to proceed differently in general. Therefore it is natural to expect Axx ≠ Ayy also in the present case.
In many other d-induced reactions, a relation A xx ≈ −A yy holds approximately. A deuteron has a prolate shape. If a d-beam is vertically polarized and d-induced reactions take place in peripheral region of target nuclei, the reactions may be enhanced in vertical plane and suppressed in horizontal plane, or oppositely suppressed and enhanced.
The relation A xx ≈ A yy in pd capture is curious. It is interesting to investigate the origin(s) of this relation, including to estimate effects from short-range 3NF.
Experiments on pd star anomaly at low energy
Discrepancies at higher energy have candidates for their origin(s), e.g., short-range 3NF, relativity, and high-angular-momentum reactions. On the contrary, discrepancies at lower energy have no apparent candidates for their origin(s).
A y puzzle is well known for a long time. We have already enough data sets for A y puzzle and no systematic measurements on A y puzzle have been made recently. Many theoretical attempts such as modifications of 2NF and introduction of LS dependent 3NF have been examined, but A y puzzle has not been solved yet.
Another big problem at low energy is Space-Star anomaly. SS anomaly was found first in nd breakup at E n = 13 MeV and 10.5 MeV [11], and was confirmed at 13 MeV by another experiment [12]. Experiments of pd breakup at SS configuration were also made, and SS anomaly in pd breakup was first found when a reliable pd calculation was made [6].
At 13 MeV, nd breakup cross section at SS configuration is about 25% higher than calculation, and pd breakup cross section at SS is about 15% lower than calculation. So far no theoretical suggestions have been made for SS anomaly and its large charge asymmetry.
Because a reliable calculation on pd breakup has become available since 2005 [6], we have been making a systematic measurement of pd star cross section at E p = 13 MeV and 9.5 MeV. At Koeln University, measurement of pd star cross section at E d = 19 MeV (E p = 9.5 MeV) was made [13].
When the three outgoing nucleons from Nd breakup have the same energy and form an equilateral triangle, we call the configuration Star. When the Star triangle is perpendicular to the beam axis, we call the configuration Space Star. The angle between the Star plane and the beam axis in the c.m. frame is called α, as defined in Fig. 5. We usually detect two protons from pd breakup at symmetrical angles with respect to the beam axis, and we define α = 0° when a p-beam is used and the two detected protons are at forward angles in the horizontal plane (see Figure 5).
To see characteristics of Star anomaly, α-dependence of pd Star anomaly has been measured recently by Koeln group [13] and Kyushu group. Star configuration at α=0 • is close to QFS configuration, and possible anomaly at QFS is also being investigated at Kyushu. Y. Maeda and Y. Eguchi report on these subjects in this conference.
As seen in Figure 6, the Star anomaly at 13 MeV is confined to around 90°. The plane perpendicular to the beam axis is special: only in the perpendicular plane is the pd breakup reaction suppressed. At this energy it seems sufficient to consider a curious suppression in the perpendicular plane alone. At 9.5 MeV, however, a remarkable Star anomaly appears also at backward angles, as indicated by the Koeln experiment at Ed = 19 MeV. A more complex consideration may be necessary to explain this wide-range Star anomaly.
Before thinking of origins of pd Star anomaly, it is better to make a confirming experiment at E d = 19 MeV. A polarized d-beam was used in Koeln experiment, but an unpolarized d-beam will be used in the confirming experiment to measure cross section alone.
Our strategy is (a) confirmation of the pd Star anomaly by additional experiments, (b) investigation of origin(s) of the pd Star anomaly, and then (c) elucidation of the nd Star anomaly. So far, a large charge asymmetry between the nd SS anomaly and the pd SS anomaly has been reported, and this large charge asymmetry is hard to explain. We will first elucidate the pd Star anomaly, based on systematic and reliable measurements. Elucidation of the pd Star anomaly may include a suggestion on the charge asymmetry. Experimental data for nd Star are insufficient at present, and reliable nd experiments are hard to make. So we will not investigate the nd Star anomaly until the pd Star anomaly is completely elucidated.
Experiments on pd QFS at low energy
Cross section enhancement of 16-18% was reported in nd QFS at E n = 25 MeV and 26 MeV. Also cross section suppression in pd QFS was reported at E p = 10.5 MeV and 19 MeV.
We are making systematic measurement of pd QFS cross section at KUTL, and no apparent pd QFS anomaly has been found at both 9.5 MeV and 13 MeV. We will measure pd QFS cross section at 10.5 MeV and 19 MeV to see if pd QFS anomaly exists or not.
Summary
Studies of 3N systems are summarized and illustrated in Figure 1. We are on the way to search for short-range 3NF at higher energy in pd scattering, pd breakup and pd capture, and to investigate origins of Star anomaly as well as of A y puzzle at low energy. Experimental studies have made steady progress. At low energy, success of reliable pd calculation enabled systematic studies of pd Star anomaly.
Challenging 3N calculations aiming to solve remaining problems in 3N reactions are expected.
"Physics"
] |
Scalability Performance for Low Power Wide Area Network Technology using Multiple Gateways
—Low Power Wide Area Network (LPWAN) is one of the leading technologies for the Internet of Things. The capability to scale is one of the key criteria when comparing such technologies. The technology uses a star network topology for communication between the end-nodes and the gateway. The star topology enables the network to support a large number of end-nodes, and with multiple gateways deployed in the network, the number of end-nodes can be increased even further. This paper aims to investigate the performance of Low Power Wide Area Network technology, focusing on the capability of the network to scale using multiple gateways as receivers. We model the network system based on the communication behaviour between the end-nodes and the gateways. We also include the communication range limit within which a data signal from an end-node can be successfully received by the gateways. The scalability performance of the technology is expressed as the number of data packets successfully received at the gateways. The simulation was carried out over several parameters, namely the number of end-nodes, gateways and channels, as well as the application time. The results show that the amount of successfully received data at the gateways increased as the number of gateways, the application time, and the number of channels used increased.
LPWAN characteristics, such as high coverage, low bandwidth, and low power consumption, are in line with the requirements of IoT applications that only need to transmit small amounts of data remotely.
LPWAN technology is designed to support billions of devices for the various applications of IoT. The technology uses a star topology architecture in which multiple end-nodes communicate directly with the gateway [3]. However, when numerous end-nodes transmit data signals to the gateway, this causes traffic overload and, eventually, data signal loss at the gateway. Increasing the number of gateways can reduce the data signal load on any single gateway. Even so, scalability analyses have frequently used only a single gateway to study the performance of LPWAN.
In this study, the performance of LPWAN in its capability to scale using multiple gateways is investigated. The development and simulation of the proposed network model were based on the collision behaviour of the data signals from the end-nodes at the gateway, using the MATLAB platform. The organization of this paper is as follows: related work from previous studies is presented in Section 2; an introduction to Low Power Wide Area Networks is given in Section 3; the proposed network model is discussed in Section 4; the simulation procedure and parameters are described in Section 5; results and discussion are presented in Section 6; and the paper ends with the conclusion.
II. RELATED WORK
Several studies on model development for LPWAN have been conducted previously to better understand the technology's ability to scale. This section presents previous works focusing on modelling and scalability of LPWAN.
The authors in [9] developed a LoRa model similar to the model from [7]. The scalability of the LoRa network was studied by observing the largest possible number of LoRa transmitters that still satisfies the average packet success probability. Another model was developed by the authors in [11] to study the scalability of LoRa technology; that model used LoRa interference behaviour for the development of the data signal collision model. Meanwhile, the authors in [12] developed a LoRaWAN simulator to study the scalability of LPWAN. The development of their packet collision model was inspired by the collision model from [11] to determine the behaviour of data signal collisions and the capture effect. The investigation in [13] shows an improvement in network scalability when using a method that assigns the SF used by the end-nodes in the network.
III. LOW POWER WIDE AREA NETWORK
Low Power Wide Area Network is a wireless communication technology that enables end-nodes to communicate over long distances using low bit rates and low energy consumption [14][15][16]. Previous studies have shown that LPWAN technology enables an end-node to communicate with a gateway over a distance of 3 kilometres in urban areas and more than 10 kilometres in rural areas [17]. Additionally, under line-of-sight conditions, the end-node data signal can reach a gateway located 20 kilometres away [18] and can still reach a gateway as far as 30 kilometres away, as reported in [19].
The ability of end-nodes to communicate remotely with a gateway is based on two main features of LPWAN: the star network topology and the modulation technique. LPWAN devices mostly operate in the unlicensed Industrial, Scientific and Medical (ISM) bands at 169, 433, 868/915 MHz, and 2.4 GHz [20]. However, these frequency values depend on the region in which the technology is being used [21][22].
Dynamic progress in LPWAN technology development has created many LPWAN-based applications and solutions in the market. The currently best-known LPWAN technologies are Sigfox and LoRa (from Semtech). The Sigfox technology uses three main components for communication: Ultra Narrow Band radio technology, Binary Phase Shift Keying, and Gaussian Frequency Shift Keying modulation. Typically, depending on the region, the ISM bands used by the technology are 868 to 869 MHz and 902 to 928 MHz. Sigfox devices are capable of sending small data packets, with a maximum data size of 12 bytes for uplink and 8 bytes for downlink, using a lightweight protocol. Altogether, the Sigfox frame uses 26 bytes, with 12 bytes of payload data and 14 bytes of protocol overhead. This protocol overhead is smaller than in conventional LPWAN technologies, which apply larger protocol overheads to transmit data [23].
In addition to Sigfox, Semtech developed the LPWAN technology known as LoRa. The technology is designed for a combination of long range, low power consumption, and secure small-size data transmission. It operates in the unlicensed sub-GHz ISM bands using a so-called chirp spread spectrum (CSS) modulation to optimize power consumption and widen the communication range. LoRa technology uses a combination of two layers: the physical layer, known as LoRa, for connectivity, and the MAC layer, known as LoRaWAN.
IV. NETWORK MODEL
This section describes the proposed network model used to study the scalability of LPWAN. The communication model in this study mimics the communication protocol between the end-node and the gateway for scalability study purposes. The following are the assumptions for the behaviour of a data signal transmitted from an end-node and received by the gateway, based on [3], [7], [11].
A. The Interference Conditions
In this model, the end-nodes are grouped into two types, known as the reference node and the interference node. The reference node is the end-node currently transmitting data to the gateway. The interference nodes are the other end-nodes, besides the reference node, that transmit data signals before, during, or after the reference node's transmission. The received status of the data signals of the reference and interference nodes at the gateway determines whether a data signal is successfully received, based on the collision conditions.
Data signal interference between the interference and reference nodes is assumed to be based on three main parameters: SF, channel, and transmission time. If data signals from the reference and interference nodes arrive at the gateway using the same SF and channel, then all of the data signals are considered unsuccessfully received by the gateway. The gateway will receive all data signals if the SF and channel used are different; data signals in this condition are said to be orthogonal to each other. Table I provides details of the interference conditions for both the reference and interference nodes. Data signal interference happens when both the reference and interference nodes use the same SF and channel. However, a data signal can still be successfully received by the gateway if both the reference and interference data signals are downloaded by the gateway after the preamble time of the data signal has passed. Fig. 1 illustrates all possible interference conditions for the end-nodes.
The data signals of the interference node in Cases 1 and 6 are successfully received by the gateway, as there is no collision with the reference node. In Case 2, the data signal from the reference node is successfully received by the gateway: it arrives at a time when the preamble of the interference node's data signal has already been downloaded by the gateway, so both data signals from the reference and interference nodes are successfully downloaded. The same holds for Case 6, where the roles of the reference and interference nodes are exchanged. When the data signal from the reference node arrives while the preamble of the interference node's data signal is still being downloaded, both data signals are assumed not to be received by the gateway, as shown in Case 3. The same applies to Cases 4 and 5, where the roles of the reference and interference nodes are exchanged. Table II shows the received status of the interference and reference nodes at the gateway.
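The collision rule described above can be summarized in a few lines of code. The paper's simulation is implemented in MATLAB; the sketch below uses Python purely for illustration. The function name, data structure and preamble-time parameter are assumptions, not elements of the paper, and the logic only restates the conditions of Table I and II (same SF and channel, and arrival before the earlier signal's preamble has been fully downloaded).

```python
from dataclasses import dataclass

@dataclass
class Transmission:
    sf: int          # spreading factor used by the end-node
    channel: int     # channel assigned by the gateway
    start: float     # arrival time at the gateway (s)
    toa: float       # time-on-air of the packet (s)

def collides(ref: Transmission, intf: Transmission, preamble_time: float) -> bool:
    """Return True if the reference and interference signals collide at the gateway.

    Signals on different SFs or channels are treated as orthogonal (no collision).
    If the later signal arrives after the earlier signal's preamble has already
    been downloaded, both are still received."""
    if ref.sf != intf.sf or ref.channel != intf.channel:
        return False  # orthogonal signals, both received
    # No temporal overlap at all -> no collision
    if ref.start + ref.toa <= intf.start or intf.start + intf.toa <= ref.start:
        return False
    earlier, later = (ref, intf) if ref.start <= intf.start else (intf, ref)
    # Later signal arrives while the earlier preamble is still being downloaded
    return later.start < earlier.start + preamble_time
```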
B. SF Selection
In this study, the SF selection used in the network model for the end-nodes was inspired by [2]. The selected SF depends on the distance between the end-node and the gateway. When the gateway receives a data signal from the end-node, it also records the RSSI and SNR values of the signal. Typically, the RSSI and SNR values decrease as the distance between the end-node and gateway increases. In addition, the data signal may be attenuated depending on the line-of-sight conditions between the end-node and gateway, which further lowers the recorded RSSI and SNR values. The end-node then requires a higher SF to transmit its data signal to the gateway, depending on the signal condition [11].
Following the previous study, an end-node located far away from the gateway is assumed to use an SF of 12, so that its data signal is still expected to reach the gateway. However, a data signal cannot be received by the gateway if the end-node is located too far away, because of signal attenuation. It is therefore reasonable to set a limit on the distance between the end-node and gateway for a data signal to be successfully received. Table III shows the SF value selected for data transmission based on the distance between the end-node and the gateway; each SF is assumed to cover a 2-kilometre band of distances. When the distance exceeds 12 kilometres, the signal is lost and is not received by the gateway. A small sketch of this distance-based SF selection is given below.
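The following sketch implements the distance-to-SF mapping just described, assuming 2 km bands per SF (SF7 for the closest band up to SF12 for 10-12 km) and signal loss beyond 12 km. The exact band boundaries of Table III are not reproduced in the text, so this particular assignment is an assumption for illustration.

```python
from typing import Optional

def sf_from_distance(d_km: float) -> Optional[int]:
    """Assign a spreading factor from the end-node-to-gateway distance.

    Assumes 2 km per SF band: SF7 for [0, 2) km, SF8 for [2, 4) km, ...,
    SF12 for [10, 12] km. Returns None when the signal is considered lost."""
    if d_km > 12.0:
        return None  # beyond the limit distance, the signal is lost
    bands = [(2, 7), (4, 8), (6, 9), (8, 10), (10, 11), (12, 12)]
    for upper, sf in bands:
        if d_km <= upper:
            return sf
```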
C. Gateways Location
In this study, multiple gateways were used to receive the data signals from the end-nodes; the numbers of gateways considered are 2, 4, and 6. The model uses a two-dimensional square network field of size L x L. The locations of the gateways were based on the length L of the network field. Let the coordinate of a gateway be GW(g) = (GWx, GWy), where g is the index of the gateway, GWx is the x-coordinate and GWy is the y-coordinate. The gateway locations then depend on the number of gateways used and are given by the equations below.
V. SIMULATION
The performance of the proposed model is evaluated via simulation on the MATLAB platform. Suppose there are N end-nodes distributed randomly in an Lx x Ly two-dimensional network field. Each end-node is assumed to use a specific SF based on its distance d from the gateway, as discussed in the previous section. Let D(n) = (x(n), y(n)) be the coordinates of the distributed end-nodes and GW(g) = (GWx(g), GWy(g)) be the coordinates of the gateway locations, where n = {1, 2, 3, ..., N} and g indexes the gateways. Then, the distance of end-node n from gateway g is defined as

d(GW(g), D(n)) = [ (GWx(g) - x(n))^2 + (GWy(g) - y(n))^2 ]^(1/2)    (13)

Typically, in LPWAN, one end-node can transmit a data signal that is received by multiple gateways. The network decides which gateway is optimal for the end-node's next data transmission based on the link strength at the gateway [11]. The received signal strength at the gateway is mainly related to the distance between the node and the gateway. The proposed model uses this condition to let each end-node choose the nearest gateway for transmitting its data signal. The network then assigns the SF based on the distance between the end-node and that gateway.
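Equation (13) and the nearest-gateway rule translate directly into code. The sketch below pairs each end-node with its closest gateway and reuses the distance-based SF assignment from the earlier sketch; all names are illustrative, and the gateway coordinates are whatever the placement equations of the previous subsection produce.

```python
import math

def distance(gw, node):
    """Equation (13): Euclidean distance between a gateway and an end-node."""
    return math.hypot(gw[0] - node[0], gw[1] - node[1])

def nearest_gateway(node, gateways):
    """Return (gateway index, distance) of the closest gateway to the end-node."""
    dists = [distance(gw, node) for gw in gateways]
    g = min(range(len(gateways)), key=lambda i: dists[i])
    return g, dists[g]

def assign(nodes, gateways):
    """Assign each end-node its nearest gateway and an SF (sf_from_distance
    from the earlier sketch); coordinates are in metres, so convert to km."""
    plan = []
    for node in nodes:
        g, d_m = nearest_gateway(node, gateways)
        plan.append((g, sf_from_distance(d_m / 1000.0)))
    return plan
```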
Additionally, the gateway randomly assigns a channel CH(n) to each end-node in the range [1, CH], where CH is the total number of channels. A starting time is also assigned randomly to each end-node for transmitting its packet data, so as to mimic the real behaviour of the end-nodes.
Then, each end-node starts transmitting its packet data to the corresponding gateway at its starting time. When the end-node completes the transmission, it sets a new starting time (New_ST(n)) as the combination of the starting time, the time-on-air (ToA), and the processing time (PT). The processing time is the time the end-node needs to prepare the data for the next transmission and lies in the range [0, 1.000 s]; the maximum time for an end-node to process the data is assumed to be 1 s. The ToA is the time needed for the data signal from the end-node to be successfully received by the gateway; it depends on the payload size, bandwidth, SF and code rate used by the end-node for the data transmission. Refer to [24] for more information on ToA.
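For completeness, a sketch of the update in equation (17) together with a commonly used LoRa time-on-air formulation (as documented by Semtech and summarized in references such as [24]) is given below. The bandwidth, coding rate, preamble length and header settings are assumed defaults, not values taken from the paper.

```python
import math

def lora_toa(payload_bytes, sf, bw_hz=125e3, cr=1, preamble_symbols=8,
             crc=True, implicit_header=False, low_dr_opt=False):
    """Time-on-air (seconds) of a LoRa packet, following Semtech's formulation."""
    t_sym = (2 ** sf) / bw_hz
    de = 1 if low_dr_opt else 0
    ih = 1 if implicit_header else 0
    num = 8 * payload_bytes - 4 * sf + 28 + 16 * int(crc) - 20 * ih
    n_payload = 8 + max(math.ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)
    t_preamble = (preamble_symbols + 4.25) * t_sym
    return t_preamble + n_payload * t_sym

def new_start_time(st, toa, pt):
    """Equation (17): New_ST(n) = ST(n) + ToA(n) + PT(n)."""
    return st + toa + pt
```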
New_ST(n) = ST(n) + ToA(n) + PT(n)    (17)

At each gateway, a data signal is successfully received or not according to the interference conditions discussed in the previous section. The simulation is run in rounds; each round ends when all end-nodes have completed transmitting their data signals to the gateways. Once the designated total number of rounds has been reached, the simulation stops. The program then calculates the percentage of received packet data (PPD), defined as the total number of data signals received at the gateways over the total number of data signal transmissions from the end-nodes. Table IV shows the parameters used in the simulation.

VI. RESULTS AND ANALYSIS

Fig. 2 shows an example of 500 end-nodes (represented by unfilled round shapes) with four gateways (represented by black round shapes) in a 24000 m x 24000 m network field. The different colours of the unfilled round shapes indicate the end-nodes that transmitted their data signals to the respective gateways.
Next, Figs. 3 to 5 show the percentage of end-nodes per gateway. The percentage is calculated from the average total number of end-nodes transmitting to each gateway over 50 rounds. The average number of end-nodes per gateway varies as the total number of gateways increases. The end-nodes are located randomly in the network field, while the locations of the gateways are fixed. An unbalanced number of end-nodes per gateway affects the data signal throughput of that gateway. For example, as shown in Fig. 5, the total numbers of end-nodes transmitting data signals to gateways 1, 3, 4, and 6 are higher than those for gateways 2 and 5. The higher the number of end-nodes per gateway, the higher the chance that the end-nodes' data signals collide with each other during data transmission.
The simulation results show the effect of the different parameters on the PPD obtained with the proposed model. Figs. 6 to 9 show the PPD results using one channel and 8 channels with different application times and network field sizes. Overall, the PPD value decreases as the number of end-nodes increases. This reduction in PPD occurs because the amount of data signal from the sensor nodes rises, resulting in more data signals arriving at the gateways. This increases the chance of data signals colliding with each other, resulting in data signal loss at the gateway.
The overall PPD values shown in Fig. 6 increase when the number of gateways increases, because more gateways decrease the throughput load handled by a single gateway. Besides, increasing the number of channels in the network also increases the PPD value: according to the interference conditions, data signals on different channels avoid collision.
Meanwhile, the results in Fig. 7 show a similar pattern to those in Fig. 6. Increasing the application time increases the PPD value; Fig. 6 uses an application time of 600 s, while Fig. 7 uses 3600 s. In a single channel with two gateways, the PPD value for an application time of 3600 s is similar to the PPD value obtained with 8 channels, two gateways and an application time of 300 s. A higher application time increases the spread of the starting times between the end-nodes (refer to equation 15). This reduces the number of end-nodes with the same or similar starting times, and therefore the chance of data signals colliding with each other. Although the PPD value increases when either the number of gateways or channels increases, increasing the number of channels gives better performance than increasing the number of gateways, and increasing the application time gives better PPD results than increasing either the number of gateways or channels. Figs. 8 and 9 repeat the settings of Figs. 6 and 7 but with the network field doubled in size. Overall, the PPD value again decreases as the number of end-nodes increases, similar to the previous results but with lower PPD values. Increasing the number of channels or the application time increases the PPD value. The PPD value when using only two gateways stays at 50% and below; this is an effect of the larger network field and indicates that only half of the data signals from the end-nodes are received by the gateways, since a gateway only collects data signals from end-nodes located within the set limit distance. However, the PPD value increases when more gateways are used to receive the data signals.
VII. CONCLUSION
In this paper, the development and simulation of a comprehensive LPWAN model for studying scalability using the MATLAB simulator is presented. The model includes several assumptions based on the behaviour of LPWAN communication between the end-nodes and gateways, such as the interference conditions of the data signals, the selection of the Spreading Factor, and the application time. The results show that increasing the number of end-nodes decreases the PPD value, whereas the PPD value increases when the number of gateways, the number of channels, or the application time increases. The locations of the gateways directly influence the number of end-nodes served per gateway, and data signal collisions are more likely to occur when more end-nodes transmit to a single gateway. Deploying more gateways may overcome this problem; however, in a real application, increasing the number of gateways will double the cost. Meanwhile, the selection of high-performance LPWAN devices is important in order to support a high number of channels.
The future scope of the current work lies in choosing optimal locations for the gateways. Such optimal gateway locations should increase the data delivery to the gateways compared with the gateway locations proposed here, when using a similar network environment. | 4,559.8 | 2020-01-01T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Physical-mechanical characterization of biodegradable Mg-3Si-HA composites
Purpose – A porous implant surface is shown to facilitate bone in-growth and cell attachment, improving overall osteointegration, while providing adequate mechanical integrity. Recently, biodegradable materials possessing such superior properties have been the focus of efforts aiming to revolutionize implant design, material and performance. This paper aims to present a comprehensive investigation into the design and development of low elastic modulus porous biodegradable Mg-3Si-5HA composite by the mechanical alloying and spark plasma sintering (MA-SPS) technique. Design/methodology/approach – This paper presents a comprehensive investigation into the design and development of low elastic modulus porous biodegradable Mg-3Si-5HA composite by the MA-SPS technique. As the key alloying element, HA powder in weight proportions of 5 and 10 per cent is mixed with the base elemental magnesium (Mg) particles to form composites of potentially variable porosity and mechanical properties. The aim is to investigate the performance of the synthesized Mg-3Si composites together with HA in terms of mechanical integrity (hardness and Young's moduli), corrosion resistance and in-vitro bioactivity. Findings – Mechanical and surface characterization results indicate that alloying of Si leads to the formation of a fine Mg2Si eutectic dense structure, hence increasing hardness while reducing the ductility of the composite. On the other hand, the alloying of HA in the Mg-3Si matrix leads to the formation of structural porosity (5-13 per cent), thus resulting in low Young's moduli. It is hypothesized that biocompatible phases formed within the composite enhanced the corrosion performance and bio-mechanical integrity of the composite. The degradation rate of the Mg-3Si composite was reduced from 2.05 mm/year to 1.19 mm/year by the alloying of HA. Moreover, the fabricated composites showed excellent bioactivity and offered a channel/interface to MG-63 cells for attachment, proliferation and differentiation. Originality/value – Overall, the findings suggest that the Mg-3Si-HA composite fabricated by MA and spark plasma sintering may be considered a potential biodegradable material for orthopedic application.
Introduction
The increased demand for artificial organs, hard tissue replacements and bone fixation devices has led to the design and development of a wide range of new biomaterials. Commonly used and successful are stainless steel, cobalt-chromium, titanium and their composites, which possess superior bio-mechanical properties and integrity (Geetha et al., 2009). However, it has been witnessed that their full potential is often hindered because of consequential drawbacks. For instance, first, the elastic modulus of these materials is much higher than that of bone (10-45 GPa), which causes stress shielding (Prakash et al., 2016). As a result, bone resorption occurs, leading to implant loosening and failure. Second, these biomaterials are used as long-lasting bone fixation devices or implants. Spoerke et al. (2005) reported that after bone healing, the implants must be removed from the body by a secondary surgical procedure, causing increased healthcare cost and mental stress for the patients. In light of the issues around existing hard metal implants, magnesium (Mg)-based composites are gaining growing attention as a promising alternative for bio-inserts and bone fixation devices, because of their high biodegradability, superior biocompatibility and low elastic modulus nearing that of bone (Staiger et al., 2006). Uddin et al. (2015) reported that Mg degrades very rapidly after implantation in the human body. Over the decades, numerous methodologies and progressive techniques have been employed to regulate the deterioration rate such that the implant provides adequate mechanical integrity until complete bone healing (Uddin et al., 2015 and Uddin et al., 2017). This has been a pressing challenge for biomedical engineers and material scientists aiming to explore potential solutions for producing implants with controlled degradation ability. Element alloying has been reported as the most successful and promising method to control the degradation rate of Mg composites. Radha and Sreekanth (2017) critically reviewed the development of magnesium alloys, in which elements such as Zn, Al, Ag, Y, Zr, Nd, Si, Mn, TiO2 and Ca were selected according to their biological function in the human body. While the element Al in Mg composites improves mechanical properties, released Al3+ ions cause Alzheimer's disease and muscle fiber damage (Gu et al., 2011). Song (2007) reported that Zr causes very serious diseases such as lung cancer, liver cancer, and breast cancer. The alloying of Nd and Y in Mg alloys has been reported to disrupt the growth of tissues around the implant. Li et al. (2008) observed that the alloying of Ca reduced the degradation rate and improved bio-mechanical integrity in a corrosive medium. Moreover, Ca is a main and prime element of human bone, which stimulates the bone ingrowth process, thus accelerating bone healing (Khanra et al., 2010). The alloying of Zn and Mn in the Mg matrix enhanced both elasticity and corrosion resistance (Gu et al., 2010). Recently, Ben-Hamu et al. (2007) reported that Si has proved to be an important element for alloying aimed at tissue development. The developed Mg-Si composites showed low ductility, high strength and high corrosion resistance because of the presence of larger Mg2Si particles and eutectic phases.
In this regard, various manufacturing techniques, such as conventional sintering, have been used to fabricate Mg and its composites (Vahidgolpayegani et al., 2017). However, these techniques have shortcomings, for example, long sintering times and high temperatures, which degrade mechanical and electrochemical properties; consequently, the fabricated devices have failed in the long term under cyclic loading. The spark plasma sintering (SPS) technique has been reported as a novel and powerful approach for fabricating porous compacts with improved mechano-biological, antibacterial and corrosion performance. SPS is a powder metallurgy technique in which the consolidation of powders by sintering uses a shorter holding time, a relatively lower sintering temperature and a high pressure at rapid heating and cooling rates (>100°C/min). Table I presents a summary of the current literature on the fabrication of Mg alloys using SPS. It is clear that many past studies reported on the design, development and synthesis of Mg alloys alloyed with Mn and Zn using various fabrication techniques, with the aim of controlling the degradation rate. However, to the best knowledge of the authors, no research study is available reporting on the combined effect of Si and HA addition in Mg alloys on elastic modulus, corrosion resistance and bioactivity, as can be seen in Table I. To this end, the current paper aims to study the synthesis, characterization, corrosion and cell response of Mg-3Si-HA composites fabricated via the MA-SPS technique. The key expectation is that the fabricated porous bio-alloy will exhibit improved bio-mechanical integrity while offering increased corrosion resistance to delay degradation, together with bioactivity for bone fixation and orthopedic applications.
2. Material and method

2.1 Mechanical alloying and consolidation by spark plasma sintering

High-purity (≥ 99.9 per cent) elemental Mg, Si and HA powders were used to prepare the Mg-3Si-HA composites. The chemical composition of the proposed composites in weight per cent is listed in Table II. Figure 1 shows the shape and size of the powder particles before mechanical alloying (MA). The HA powder particles have an average size of 0.5 µm with irregular shape, whereas the other powder particles exhibited an average size of 25 µm with spherical morphology. The required powders were weighed and MA was carried out using a high-energy planetary ball mill (Make: Fritsch, Pulverisette 7). A stainless steel vial with stainless steel balls (5 mm diameter) was used. The powder mixture was alloyed for 12 h with a ball-to-powder ratio of 10:1 at a rotational speed of 300 rpm. The blended powders were first pre-heated at 200°C for 2 h in an argon gas atmosphere to remove moisture and then consolidated via the SPS technique using an SPS-5000 machine (Model: Dr Sinter SPS-625, Fuji Electronic Industrial Co. Ltd., Japan), following a previously reported procedure. The SPS was carried out with a heating rate of 50 K/min (holding time 5 min) under vacuum at a sintering temperature of 400°C and an applied pressure of 40 MPa.
A graphite die was used for the sintering of the powder mixture, and solid compacts of 20 mm diameter and 4 mm thickness were synthesized. The objective of changing temperature and pressure was to investigate their effect on structural porosity and density. Figure 2(a) shows a schematic representation of the SPS technique. During the SPS process, thermal energy generated by electrical sparks between the powder particles at the contact areas causes partial melting of the powder grain boundaries, while the uniaxially applied pressure densifies the powder mixture [Figure 2(b)]. The process of densification and solidification forms the final sintered compact. Figure 2(c) shows the mass transformation during SPS and the phenomena of partial diffusion and welding of powder particles. Sintering pressure helps to eliminate pores and induces an additional driving force for compaction. On the other hand, a high sintering temperature assists the powder particles to coalesce, which subsequently reduces the porosity and densifies the compact (Talò et al., 2017; Carnì et al., 2017; Ermakova and Dayyani, 2017; Fraddosio et al., 2017).
Microstructure and mechanical properties
The percentage of structural porosity was calculated by the Archimedes method using water. First, the sample's mass was weighed in air and the sample was then immersed in a small beaker. The weight of the sample in water was measured, whereby the loss of weight of the sample when suspended in water equals the mass of fluid displaced, from which its volume, and hence the open porosity, can be calculated (a common form of this calculation is sketched below).
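The Archimedes relation itself did not survive extraction. The sketch below shows one common form consistent with the measurement described (dry mass plus mass suspended in water); it is an assumption about the exact expression used, with the theoretical density supplied externally (for example from the rule of mixtures, see the later sketch).

```python
def archimedes_porosity(m_dry_g, m_sub_g, rho_theoretical, rho_water=1.0):
    """One common Archimedes-type estimate (assumed form, not the paper's formula).

    Bulk volume from the buoyancy (mass loss in water), bulk density from the
    dry mass, and porosity relative to the theoretical density (in per cent)."""
    volume_cm3 = (m_dry_g - m_sub_g) / rho_water
    rho_bulk = m_dry_g / volume_cm3
    return (1.0 - rho_bulk / rho_theoretical) * 100.0
```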
Table I (fragments recovered from the flattened layout): Mg-Si(-Ca, Zn) alloy - the addition of Ca and Zn to the Mg-Si alloy improved the bio-corrosion resistance and showed very good biocompatibility; in-vitro analysis revealed excellent adhesion and growth of osteoblastic cells, and in-vivo results suggested good biocompatibility (Liu et al.). Pure Mg - the densification of pure Mg can be improved by reducing the particle size, suggesting that the intrinsic driving force, local pressure and current intensity are significantly enhanced by a decrease in particle size under the same sintering conditions, which promotes shrinkage of pores, formation of sintering necks and mass transport during SPS; biocompatibility not studied. (Table continued.)
To investigate the morphology and elemental composition, samples were cut from the as-sintered compacts. The samples were finished with silicon carbide emery papers and polished to a surface roughness Ra of 0.5 µm. The microstructure, morphology, elemental composition and phase composition of the samples were investigated by field emission scanning electron microscopy (FE-SEM; JEOL 7600F), energy dispersive spectroscopy (EDS) and the XRD technique, respectively. Young's modulus is a basic characteristic of a biomaterial and is of interest for determining mechano-biological stability. Thus, elastic modulus and hardness were determined via nanoindentation tests (Hysitron TI-950 indentation system) using the Oliver-Pharr method, as reported by Oliver and Pharr (1992). A Berkovich tip was used for the indentation with a maximum applied load of 1,000 µN.
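For readers unfamiliar with the Oliver-Pharr analysis, the sketch below shows how hardness and elastic modulus are typically extracted from the unloading portion of a load-displacement curve for a Berkovich tip. The ideal area function, the geometric constant ε = 0.75 and the diamond indenter properties are standard textbook values and are assumptions here, not values quoted by the paper.

```python
import math

def oliver_pharr(p_max, h_max, stiffness, nu_sample=0.3,
                 e_indenter=1141e9, nu_indenter=0.07):
    """Hardness and elastic modulus from a Berkovich nanoindentation unload curve.

    p_max     : maximum load (N)
    h_max     : depth at maximum load (m)
    stiffness : unloading stiffness S = dP/dh at p_max (N/m)
    Returns (hardness_Pa, youngs_modulus_Pa)."""
    eps = 0.75                                   # geometric constant for Berkovich
    h_c = h_max - eps * p_max / stiffness        # contact depth
    area = 24.5 * h_c ** 2                       # ideal Berkovich area function
    hardness = p_max / area
    e_reduced = (math.sqrt(math.pi) / 2.0) * stiffness / math.sqrt(area)
    # Remove the indenter contribution to obtain the sample modulus
    inv_sample = 1.0 / e_reduced - (1.0 - nu_indenter ** 2) / e_indenter
    e_sample = (1.0 - nu_sample ** 2) / inv_sample
    return hardness, e_sample
```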
Corrosion resistance
The corrosion behavior and degradation rate were assessed using an electrochemical potentiodynamic workstation (DC potentiostat/galvanostat model Auto Lab PGSTAT30, Netherlands). To mimic the human body fluid condition, Ringer's solution was used as a simulated body fluid (SBF) electrolyte. The polarization behavior was monitored after 24 h of immersion of the specimens in SBF with a scan rate of 0.001 V s-1. A three-electrode cell was used, with the specimen as the working electrode, a graphite rod as the counter electrode and Ag/AgCl saturated calomel as the reference electrode. The corrosion parameters were determined from the Tafel plot using the Stern-Geary approach, as per ASTM standard G102-89, following the procedure adopted by Prakash and Uddin (2017).
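As an illustration of the Stern-Geary/ASTM G102 treatment mentioned above, the sketch below converts Tafel slopes and corrosion current density into a polarization resistance and a corrosion rate in mm/year. The equivalent weight and density are alloy-specific inputs the user must supply; the constant 3.27 x 10^-3 mm·g/(µA·cm·year) comes from ASTM G102, and the example numbers in the comment are assumptions for pure Mg, not values from the paper's tables.

```python
def stern_geary_b(beta_a_mV, beta_c_mV):
    """Stern-Geary constant B (mV) from anodic/cathodic Tafel slopes (mV/decade)."""
    return (beta_a_mV * beta_c_mV) / (2.303 * (beta_a_mV + beta_c_mV))

def polarization_resistance(beta_a_mV, beta_c_mV, i_corr_uA_cm2):
    """Polarization resistance Rp (ohm.cm^2), with i_corr in µA/cm^2."""
    b_volts = stern_geary_b(beta_a_mV, beta_c_mV) / 1000.0
    return b_volts / (i_corr_uA_cm2 * 1e-6)

def corrosion_rate_mm_per_year(i_corr_uA_cm2, equiv_weight_g, density_g_cm3):
    """ASTM G102 corrosion rate: CR = 3.27e-3 * i_corr * EW / rho (mm/year)."""
    return 3.27e-3 * i_corr_uA_cm2 * equiv_weight_g / density_g_cm3

# Example: assuming pure Mg (EW ~ 12.15 g/equiv, rho ~ 1.74 g/cm^3) at 125 µA/cm^2
# gives a few mm/year, the same order of magnitude as the value reported for Mg-3Si.
```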
Degradation analysis in simulated body fluid immersion
The degradation rate of the as-fabricated Mg composites was assessed by immersion tests in SBF solution for 3, 7 and 14 days. The samples were well polished and dipped into SBF solution in sterilized vials as per the ASTM G31-72 standard. After the predetermined immersion period, the samples were retrieved from the glass vials, washed with distilled water and dried in a desiccator for 24 h. The degradation rate of the samples was evaluated from the mass loss in SBF solution due to Mg2+ ion release. The degraded surface morphology of the samples was analyzed using FE-SEM and EDS.
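A short sketch of the mass-loss conversion behind such immersion tests is given below, using the standard ASTM G31 relation CR = K·W/(A·T·D). The constant K = 8.76 x 10^4 (for mm/year with mass in g, area in cm^2, time in hours and density in g/cm^3) is the standard value, and the numbers in the example comment are assumptions, not measurements from the paper.

```python
def immersion_corrosion_rate(mass_loss_g, area_cm2, time_hours, density_g_cm3):
    """ASTM G31 corrosion rate in mm/year from an immersion (mass-loss) test."""
    K = 8.76e4  # constant giving mm/year for g, cm^2, hours, g/cm^3
    return (K * mass_loss_g) / (area_cm2 * time_hours * density_g_cm3)

# Example: a 20 mm x 4 mm disc (~8.8 cm^2 total surface) losing 0.05 g over
# 14 days (336 h) at an assumed density of 1.8 g/cm^3 is about 0.8 mm/year.
```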
In-vitro bioactivity analysis
The bioactivity of the fabricated composites was assessed by in-vitro cell culture tests. Disc-type test specimens of size φ5 x 3 mm were cut from the as-sintered composites and placed in a 96-well plate. MG-63 human osteoblast-like cells were seeded at an initial density of 1 x 10^5 cells/cm2 onto the samples and incubated at 37°C in a 5 vol.% CO2 atmosphere. After predetermined periods of one, three and seven days, the cells were detached by trypsinization and their density was determined with a hemocytometer under an optical microscope. Live and dead assays were performed to determine cell viability on the test specimens using an Olympus 51 fluorescence microscope. Cell adhesion onto the surface was observed under FE-SEM after fixation, following a protocol adopted by Prakash and Uddin (2017). Cell proliferation was evaluated by MTT assay for culture periods of one, three and seven days. Cell differentiation was evaluated by an alkaline phosphatase (ALP) activity assay. Three samples were used in each group, and results are reported as means ± standard deviations. One-way analysis of variance (ANOVA) was performed, considering p < 0.05 statistically significant.
Results and discussion
3.1 Microstructure characterization

The net weight of the powder mixture before and after MA was approximately 10 g, clearly indicating that there was no loss of powder. The change in particle size and morphology is a function of milling time. After milling for 8 h, the powder is notably reduced in size and becomes irregular in shape. The reduction in grain size was due to plastic deformation and fragmentation during milling. After milling, the powder particles were completely homogenized, as can be seen in Figure 3. Furthermore, it was observed that the powder particles did not fuse with each other, because the localized temperature was not raised for a long time. Zheng et al. (2011) reported a similar finding when Mg particles were milled with Al and Cu for more than 10 h. Figure 3 shows the SEM morphology and associated EDS spectrum of the mechanically alloyed powder after 8 h. It can be clearly seen that the HA powder adheres to the Mg powder surfaces and the powder sample is completely homogenized. The EDS spectrum presents only the alloying elements; no contamination took place. Figure 4 presents the EDS spectra and digital images of the as-sintered composites. The macro-scale images of the composites revealed that the samples were fully consolidated without any surface cracks or fracture. The high melting temperature of HA (~1,100°C) does not allow its particles to react/interact with the other alloying elements at the lower sintering temperature of 400°C. The EDS spectrum of the Mg-3Si composite revealed the presence of Mg, Si and O elements in the structure, as can be seen in Figure 4(a). The EDS spectrum of the Mg-3Si-5HA composite revealed the presence of Ca and P along with Mg, Si and O, as can be seen in Figure 4(b). The peak intensity ratio of Ca to P is 1.69, which is desirable in the composition. Similarly, Figure 4(c) presents the EDS spectrum of the Mg-3Si-10HA composite; the peak intensities of Ca and P are higher compared with the Mg-3Si-5HA composite, and their ratio is still 1.69. Figure 4(d-f) shows the SEM micrographs of the transverse cross-sections of all sintered composite compacts. An open pore structure can be clearly seen in the micrographs. During SPS processing, a large amount of gas was discharged because of heat generation, which results in the formation of porosity. Moreover, the SEM micrographs indicate the formation of distinct structures within the composites - dark, grey and bright phases in the Mg matrix. The dark phase is the α-Mg primary grain boundary, grey represents the Si element, and the bright phase represents the HA elements. The Mg-3Si composite compact has less structural porosity (3-5 per cent). The Mg-3Si-5HA composite has a higher degree of structural porosity (8-10 per cent) than the Mg-3Si composite, whereas the Mg-3Si-10HA composite has the highest degree of structural porosity of the three. As a consequence, the porosity results in a reduction of the density of the compact. Correspondingly, the theoretical density of all sintered composites was calculated by the rule of mixtures. Table III shows the theoretical and measured densities of the composites. It can be clearly seen that the Mg-3Si composite sample has a porosity of only 3.5 per cent, which reveals that the sample is fully sintered and densified, whereas the Mg-3Si-5HA composite sample has 8.53 per cent porosity. The porosity increases with the increase in HA weight percentage. Table II also presents the percentage of porosity generated in the composite structure.
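The density bookkeeping mentioned above (theoretical density from the rule of mixtures, porosity from the measured density) can be illustrated as follows; the resulting theoretical density can also be fed into the Archimedes sketch given earlier. The elemental densities and the example composition are assumed typical values for Mg, Si and HA, not numbers taken from Table III.

```python
def theoretical_density(weight_fractions, densities):
    """Inverse rule of mixtures: rho_th = 1 / sum(w_i / rho_i)."""
    return 1.0 / sum(w / rho for w, rho in zip(weight_fractions, densities))

def porosity_percent(measured_density, theoretical):
    """Porosity relative to the fully dense (theoretical) composite, in per cent."""
    return (1.0 - measured_density / theoretical) * 100.0

# Example for a nominal Mg-3Si-5HA composition (weight fractions 0.92/0.03/0.05)
# with assumed densities Mg ~1.74, Si ~2.33, HA ~3.16 g/cm^3.
rho_th = theoretical_density([0.92, 0.03, 0.05], [1.74, 2.33, 3.16])
print(round(rho_th, 3), round(porosity_percent(1.65, rho_th), 1))
```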
The relative density was measured as mass per unit volume and was in the range of 87-97 per cent. The density decreases with increasing HA weight percentage. Figure 5 shows the XRD patterns of the sintered composites. It can be clearly seen that the Mg-3Si-HA composites have the same pattern, although the relative intensity of the HA peaks changes with the HA content in the composite. Mg-3Si-HA shows the formation of MgCaO, CaMgSi and Mg2Si phases, whereas the Mg-3Si composite shows the formation of Mg2Si phases only. Mg2Si phases are expected to enhance the corrosion resistance, whereas CaMgSi and MgCaO are beneficial for apatite growth and improve the bioactivity. Figure 6 shows the characteristic loading/unloading plots and indent images for all sintered compacts. As depicted in the plots, the slope of the unloading curve of the Mg-3Si composite sample is smaller than those of the Mg-3Si-5HA and Mg-3Si-10HA composite samples. This means that the Mg-3Si composite sample has the smallest penetration of the three samples, which indicates that it has a high hardness; correspondingly, the Mg-3Si composite sample has a high elastic modulus. This is because of the reinforcement by the Si element, which has high hardness and increases the brittleness of the compact; as a result, high hardness and elastic moduli were obtained. On the other hand, the slope of the unloading curve of the Mg-3Si-10HA composite sample is high in comparison with the Mg-3Si-5HA and Mg-3Si composite samples. This reveals that the Mg-3Si-10HA composite sample has the highest penetration, which indicates that its hardness is lower than the others. This is because the reinforcement by the HA element creates porosity in the structure, which further reduces the mechanical properties of the compact. The Mg-3Si-10HA composite sample has a low hardness, and as a result a low elastic modulus of approximately 32 GPa (near to that of bone) was obtained. It is also evident from the indent images that the Mg-3Si-10HA composite sample has a low hardness, as its indent is larger than the indents of the Mg-3Si-5HA and Mg-3Si composite samples. Table IV presents the mechanical properties of all three composites. Figure 5. X-ray diffraction patterns of sintered compacts. Figure 6. Loading-unloading depth profile and indentation impressions on the sintered composites.
Electrochemical measurements
In-vitro corrosion evaluation of the as-fabricated composites was conducted via potentiodynamic polarization in Ringer's solution (SBF) at (37 ± 1)°C. Figure 7 shows the Tafel polarization curves of the three composite samples fabricated at a sintering pressure of 40 MPa and temperature of 400°C. Table V outlines the corrosion parameters: potential (Ecorr), corrosion current density (Icorr), polarization resistance (Rp) and corrosion rate (CR). The results showed that the Ecorr of the Mg-3Si composite was about -1.27 V and the corresponding Icorr was 125 µA/cm2, causing active degradation of the substrate. The corrosion rate of Mg-3Si was measured to be about 2.05 mm/year. The developed passive layer was less protective and unstable, and thus the composite degraded rapidly. Comparatively, the corrosion potentials of the Mg-3Si-5HA and Mg-3Si-10HA composite specimens were found to be -1.21 and -1.25, respectively. Furthermore, the polarization curves of the as-fabricated Mg-3Si-5HA and Mg-3Si-10HA composite specimens were shifted toward the lower current density side, with corresponding current densities of approximately 75 and 98 µA/cm2. In the Mg-3Si composite, the Si reinforcement particles helped in the formation of Mg2Si, a corrosion-resistive phase, as reported elsewhere. In Mg-3Si-HA, the alloying of HA and Si resulted in the formation of corrosion barrier phases (MgCaO, Mg2Si and CaMgSi). In the case of the as-sintered Mg-3Si-HA composites, the corrosion resistance was found to be highest, as the dual alloying of HA and Si produced the desirable phases. With the alloying of HA in the Mg-3Si composite, the degradation rate was reduced by 41.95 per cent, from 2.05 mm/year to 1.19 mm/year. Figure 8 presents the degradation rates of the as-fabricated Mg composites in SBF solution for 3, 7, 14, 28 and 56 days. Initially, the degradation rate of the Mg-3Si composite is high compared with the Mg-3Si-HA composite samples, but after 28 days of immersion it becomes approximately constant up to 56 days. The Mg-3Si-5HA composite samples have a lower degradation rate and a higher apatite growth level than the Mg-3Si composite samples; the high apatite growth level reduced the degradation rate of the Mg-3Si-5HA composite samples. The weight percentage of HA and the density of the as-fabricated composite greatly affect the growth of the apatite layer. When 10 per cent HA was used as reinforcement, the rate of mass deposition of the apatite layer was high, with a higher apatite growth level than in the Mg-3Si-5HA composite samples. However, this apatite layer was shed because of its highly porous nature, and degradation took place in the form of pulverized fine particles. Figure 9 presents the degraded surface morphology of the as-fabricated composites after immersion in SBF solution for 28 days. The Mg-3Si-HA composites help to form a thick layer of apatite on the surface, which further protects it from degradation/corrosion. The apatite layer formation on the Mg-3Si-5HA composite was higher than on Mg-3Si-10HA. The associated EDS spectra confirmed the presence of Ca, P, O and Mg on the degraded surface, which revealed apatite growth on the composite samples. The peak intensities of Ca and P are higher than that of Mg, which further confirms the formation of an apatite layer on the surface.
The apatite layer formed on Mg-3Si-10HA was highly porous and detached in the form of pulverized particles, which degraded the corrosion resistance and the composite itself. Very large open holes due to the release of H2 gas and Mg2+ ions can be clearly seen on the surface. Figure 10(a), (c) and (e) shows the SEM micrographs of the attached cells. It can be clearly seen that the cells have a polygonal shape and start to proliferate after 24 h. Biological activities such as cytoplasmic extensions, the formation of filopodia and retraction of ECM activity, indicating cell spreading on all specimen surfaces, were clearly observed. Figure 10(b), (d) and (f) shows the fluorescence images of live MG-63 cells on the specimen surfaces. In all cases, the surfaces exhibited a significant amount of live MG-63 cells (green), and it can be said that the composites are suitable for the growth of osteoblast-like cells. It was observed that the Mg-3Si-5HA surface exhibits a higher density of live cells compared with Mg-3Si and Mg-3Si-10HA. It appears that the polygonal morphology with filopodia of the attached cells established sound mechanical anchorage with the composite surface. The formation of these biological activities on the composite surface revealed that the adhesion and proliferation of MG-63 cells are supported and active. Hence, the as-sintered composites are favorable for the growth, proliferation, metabolic activities and differentiation of osteoblastic cells. Figure 11 shows the cell viability assessed by various assays: cell counting, MTT assay and DNA content evaluation. Figure 11(a) shows the number of cells on the as-sintered composite specimens. The as-sintered Mg-3Si-5HA composite specimens showed a higher number of cells in comparison with the Mg-3Si and Mg-3Si-10HA composite specimens. Evidently, the cell density increased with increasing incubation time for all types of specimens, which further enhanced the degree of cell attachment on the surface. Figure 11(b) illustrates the cell proliferation of MG-63 osteoblast-like cells on the Mg-3Si, Mg-3Si-10HA and Mg-3Si-5HA specimens according to the MTT assay. The Mg-3Si-5HA composite specimens have a better cell proliferation rate. This is ascribed to the presence of the Si and HA elements along with the high degree of porosity in the structure. It has been reported that Si and HA enhance the bone formation process. Moreover, the structural porosities created a super-hydrophobic surface, which enhanced the surface energy and bioactivity of the composite. This stimulates protein absorption, resulting in enhanced cell attachment and proliferation. Figure 11(c) shows the DNA content of MG-63 cells grown on all composite specimens. The results revealed that the DNA content was proportional to cell proliferation on all types of specimen surfaces; higher degrees of cell attachment and proliferation led to a higher DNA content. The Mg-3Si-5HA composite has the highest DNA content. The differentiation of MG-63 cells was evaluated using ALP activity at one, three and seven days on all types of specimens, as shown in Figure 11(d). It can be clearly observed that the ALP activity increased significantly as the culture period increased. A significantly higher level of ALP activity by MG-63 cells was observed on the Mg-3Si-5HA composite specimens compared with the Mg-3Si and Mg-3Si-10HA specimens after seven days of growth.
4. Conclusions

The potential application of the MA-SPS technique was considered for the design and development of new low elastic modulus porous Mg-3Si-HA composites with improved mechanical integrity, corrosion resistance and biocompatibility. The following conclusions were drawn from the present study: the in-vitro bioactivity results indicated that the Mg-3Si-5HA composite had excellent biocompatibility and promotes cell adhesion, growth, proliferation and differentiation.
Moreover, the combination of low elastic modulus, high corrosion resistance and enhanced bioactivity might make porous Mg-3Si-HA composites prepared by MA-SPS a promising candidate for orthopedic applications such as screws, plates and bio-inserts. Future work may focus on the control of pore size and consistency and the development of customized architectures to fulfill a wide range of applications. In addition, clinical trials and statistical analysis of in-vivo results are necessary to substantiate these claims. Future directions of the present research will also address mechanical modeling and the employment of biodegradable Mg-3Si-5HA composites for the manufacturing of a wide range of multiscale composite materials and structures with arbitrary geometry at different scales (Mosallam and Nasr, 2017). | 6,225.8 | 2018-08-30T00:00:00.000 | [
"Materials Science",
"Engineering",
"Medicine"
] |
Geographic distribution of Hemigrammus ora (Ostariophysi: Characiformes: Characidae) in the Amazon basin, Brazil
New records of Hemigrammus ora Zarske, Le Bail & Géry, 2006, previously believed to be endemic to the French Guiana drainages, are confirmed for Brazilian drainages. The species is reported from the Tocantins-Araguaia system and lower Amazon River. Morphometric and meristic data, previously undescribed morphological traits related to caudal-fin squamation and anal-fin hooks, and geographic variation are presented for the species.
In a recent expedition to the headwater streams of the Araguaia River in the states of Mato Grosso and Goiás, Brazil, specimens of Hemigrammus were collected and, after examination, identified as Hemigrammus ora Zarske, Le Bail & Géry, 2006. Hemigrammus ora is a small characid species originally described from lowland rivers in French Guiana. Material deposited in scientific collections, identified as Hemigrammus sp., has revealed that the species was previously collected at several other localities in the Araguaia-Tocantins River basin and in other tributaries of the lower Amazon basin.
In this study, meristic and morphometric data were obtained following FINK & WEITZMAN (1974), with the addition of the following features: head depth, measured at the vertical through the posterior tip of the supraoccipital process; dorsal-fin base length; longest anal-fin ray length; anal-fin base length; and dorsal to adipose fin distance, measured from the posterior end of the dorsal-fin base to the origin of the adipose fin. The measurements of snout and upper jaw lengths were also taken according to the methodology of GÉRY (1972), following ZARSKE et al. (2006), herein emphasized by an asterisk (*). Measurements were performed with a caliper to the nearest 0.05 mm on the left side of the specimens. Body measurements are presented as percents of standard length (SL), and measurements on the head are presented as percents of head length (HL) and standard length, for comparison with the original description of H. ora. Counts of supraneurals, gill rakers on the first branchial arch, branchiostegal rays, vertebrae, and procurrent caudal-fin rays were taken on five cleared and stained (c&s) specimens, prepared as for other Hemigrammus species (ZARSKE et al. 2006). Additionally, ZARSKE et al. (2006) quoted as diagnostic of H. ora the arrangement of the cusps of the premaxillary teeth of the inner row in a crescent line; 21 to 24 branched anal-fin rays; 32 to 33 lateral line scales; 10 to 15 perforated scales on the lateral line; body depth 2.94 to 3.44 in SL; and head length 3.42 to 3.96 in SL.
The toothless maxilla found in H. ora is an uncommon condition among the species of Hemigrammus, as is its shape (not illustrated in the original description) (Fig. 5). Most species of Hemigrammus have at least one maxillary tooth, and the maxilla is mostly flat along its axis, with a short anterior cylindrical rod-like process medially directed, which is connected through ligaments to the premaxilla and lateral process of the mesethmoid. In H. ora, the maxilla is proportionally reduced in length when compared with any other species of Hemigrammus. It is cylindrical along most of its axis, and only the posterior portion is lamellar. Hemigrammus ora has scales on the caudal fin covering one-third of the dorsal lobe and half to two-thirds of the length of the lower lobe, a condition found in most species of Hemigrammus (see CARVALHO et al. 2010, fig. 3b). This feature was not mentioned in the original description, but the presence of caudal-fin scales can be noticed in the picture of the paratype (ZARSKE et al. 2006: 20, fig. 1). Morphometric and meristic data of H. ora specimens from the Tocantins-Araguaia, lower Amazon, and Xingu River basins are presented in Tables I and II. Measurements and counts from these specimens overlap the diagnostic and descriptive data presented in the original description of H. ora. The only discrepant value was the snout length*, 7.7-10.6% SL (versus 3.9-7.0% SL). Among the counts, the only information that diverges from the original description is the number of small dentary teeth posterior to the first four large teeth: instead of eight to ten small conical teeth, the five cleared and stained specimens presented three small teeth, and in some instances the tooth just after the first four was tricuspid.
Regarding the presence of hooks on the anal-fin rays, ZARSKE et al. (2006: 21) briefly mentioned the absence of "hooklets on the first rays of anal fin" of H. ora. LIMA & SOUSA (2009) remarked that they were uncertain about the presence of anal-fin hooks in H. ora, because their descriptions were based on few specimens. Herein, the presence of hooks was evidenced on the anal fin of males. Some males (up to 28.7 mm SL) have small bony hooks on all branched pelvic-fin rays, and along the lengths of the last unbranched to the eighteenth branched anal-fin rays. The hooks are small, thin, dorsally arched, and their number varies from seven, on the anteriormost ray, to one, on the posteriormost hook-bearing ray. The size and distribution of hooks on the anal-fin rays of H. ora differ from the pattern described by LIMA & SOUSA (2009) for their more restricted "Hemigrammus ocellifer species group". LIMA & SOUSA (2009) characterized this group based on the presence of a single medium-sized hook per anal-fin ray, distributed at the same height on each ray, from the last unbranched to the sixth or seventh branched anal-fin ray. However, the distribution and morphology of the anal-fin hooks in H. ora resemble the pattern that LIMA & SOUSA (2009: fig. 6) assigned to Hemigrammus schmardae (Steindachner, 1882). Apart from the anal-fin hooks, no other sexually dimorphic feature was found in H. ora. No gill glands were found on macroscopic examination of the first gill arch of mature male specimens (BURNS & WEITZMAN 1996). The specimens of H. ora herein examined present two main morphological differences when compared with the type specimens: a lower number of small dentary teeth posterior to the first four large teeth (3 versus 8-10); and a longer snout (7.7-10.6 versus 3.9-7.0% SL*). Additional material of H. ora from French Guiana was not found in fish collections; therefore, we were not able to provide a more extensive investigation of those differences. In spite of that, the specimens from Brazil fit the diagnosis elaborated by ZARSKE et al. (2006) based on specimens from French Guiana, with which they share the color pattern (with distinct humeral and caudal spots) and the morphology of the maxilla, features absent from other species of Hemigrammus. Therefore, we consider the previously mentioned differences between the specimens from Brazil and French Guiana as corresponding to geographic variation.
Hemigrammus ora was described from specimens from the Pripri Yiyi River, a coastal drainage in French Guiana. It was also recorded from the Sinnamary basin based on the geographical range presented by PLANQUETTE et al. (1996) for Hemigrammus aff. schmardae, a misidentification of H. ora (according to ZARSKE et al. 2006). The species was considered to be putatively endemic to those drainages. The geographic distribution of Hemigrammus ora is herein extended to the lower Amazon tributaries and the upper Xingu and Tocantins-Araguaia River basins (Fig. 6). Based on these new records, we hypothesize that the species has a continuous distribution from its type locality in French Guiana to the lower Amazon River tributaries, up to the upper Tocantins-Araguaia River basin. This distribution pattern, from the French Guiana lowland rivers to the lower Amazon River and some Brazilian Shield rivers, is observed in other freshwater species, such as Acnodon spp.
Figure 6. New records of H. ora (dots) in Brazil. The type locality of H. ora is represented by a star in French Guiana. Each dot may represent more than one locality.
Table I. Morphometric data of H. ora, Amazon system, Brazil. Measures taken according to GÉRY (1972) are followed by an (*).
Table II. Meristic data of H. ora, Amazon system, Brazil. | 1,803.2 | 2011-08-15T00:00:00.000 | [
"Biology"
] |
Research on Spectrum Needs Prediction Method for HAPS as IMT Base Station
The High-Altitude Platform Station (HAPS) is an integral component of the Non-Terrestrial Network (NTN), a potential 6G technology. Using HAPS as IMT Base Stations (HIBS) in the mobile service is an emerging and promising approach to achieving a vertical heterogeneous network. Therefore, efforts have been made to harmonize the worldwide usage of HIBS, including investigations of the spectrum needs, usage, and technical and operational characteristics of HIBS, among which the spectrum needs have drawn particular attention because they must be determined prior to the commercial deployment of HIBS. However, there are currently very few investigations devoted to predicting the spectrum needs of future HIBS deployments, owing to some tough difficulties. Firstly, there is no actual deployment precedent of HIBS, so the future capacity demands are not clear. Secondly, both the networking of an HIBS deployment and its performance are open questions. Therefore, taking into account the study under WRC-23 Agenda Item 1.4 (HIBS-CHARACTERISTICS), this paper proposes an evaluation method for predicting the spectrum needs of HIBS deployed as a supplement to ground base stations, and presents the resulting spectrum needs for HIBS in certain frequency bands identified for IMT. The capacity demand of HIBS in 2025 is predicted, the spectral efficiency of the HIBS system is obtained by simulation, and the spectrum needs are defined as Capacity Demand/Spectral Efficiency. The main contributions of this paper are as follows. Firstly, to obtain the spectral efficiency of HIBS systems, we set up a simulation model with 7-cell networking in accordance with the configuration parameters of WRC-23 AI 1.4, equipped with beamforming antennas to improve spectral efficiency; calculating the CDF curve of the spectral efficiency using the Monte Carlo method allows the simulation results to match actual deployments more closely. Secondly, to address the lack of actual HIBS traffic data, we propose a method based on the methodology of Recommendation ITU-R M.1768-1 to simulate the capacity demand, and then predict the capacity demand of HIBS in 2025 by polynomial fitting. Finally, we predict the spectrum needs by calculating Capacity Demand/Spectral Efficiency. The spectrum demand range of HIBS for rural areas of China in 2025 predicted by the proposed method is from 80.30 MHz to 122.97 MHz. This method can effectively predict the spectrum needs of HIBS for various deployment scenarios, which plays an important role in driving the commercial deployment of HIBS.
I. INTRODUCTION
Aiming to realize the vision of the 6G three-layer vertical heterogeneous network, the NTN is one of the widely discussed promising 6G technologies, and is defined in 3GPP Technical Report 38.811 [1]. The important components of NTN include satellite systems, HAPS, UAVs, etc. Since HAPS have larger coverage and capacity than UAVs, while satellite systems experience higher path loss and higher delay than HAPS [2], [3], the use of HAPS as IMT base stations is an emerging and significant method to enhance the mobile network. The HAPS is a network node that operates at an altitude of 20-50 km above the ground and can stay quasi-stationary [4]. HAPS-related work can be traced back to the 1990s [5], showing promise because of its coverage and deployment convenience, with a particular focus on emergency relief and remote areas. However, because of the limitations of aircraft load capacity and energy, HAPS has not been commercially available so far. Owing to the development of battery, solar panel [6], [7], lightweight material and autonomous avionics technologies in the last decade [8], HIBS/HAPS is widely considered to become more economically feasible in the 6G era. Commercial and research project deployment instances of HAPS systems were introduced in [9], and a summary of past field trials and experimental studies, along with open technical issues and current applications, was given in [10] and [11]. Some works have discussed the role of HAPS in vertical NTN. The use of HAPS as either a stand-alone network or a complement to the terrestrial network was discussed in the books [12], [13]. Reference [14] demonstrated the integration of satellite systems and HAPS, bridging the wide gap between terrestrial and satellite communication systems. Moreover, basic issues for HAPS should regain attention: channel models for HAPS SISO and MIMO links were demonstrated in [15], and propagation modeling was discussed in [16], [17], and [18]. In recent years, fresh views and research on the applications and technologies of HAPS have been given in [19] and [20]. Promising research directions for HAPS communication and computation in the next generation are introduced in [19], such as the HAPS-mounted Super Macro Base Station (SMBS), the use of Reconfigurable Intelligent Surfaces (RIS) in the HAPS communications payload, Radio Resource Management (RRM) and interference management for HAPS, and AI and Machine Learning (ML) in HAPS. Reference [20] positions HAPS in the 6G era through various applications for large-scale communications, computation offloading, intelligent relaying, and distributed ML.
Although HAPS is one of the most promising 6G technologies, harmonizing its worldwide usage requires attention to aviation and spectrum regulations. The regulatory activities are handled by the ITU-R and the International Civil Aviation Organization (ICAO): ITU-R regulates spectrum aspects related to HAPS, while ICAO governs aviation safety and HAPS activities [21]. ICAO defines two distinct classes of HAPS, one being unmanned free balloons and the other unmanned aircraft; the difference between the two is that balloons are excluded from real-time management [22]. Regulatory guidance is still developing, which will affect local market choices [22]. As for the spectrum aspects, the World Radiocommunication Conference in 2019 (WRC-19) revised the spectrum regulatory framework for HAPS. The allocations dedicated to HAPS can be found in [23], [24], and [25]. ITU-R provides flexibility of spectrum usage for HAPS while ensuring protection of existing services; therefore, a large number of sharing and compatibility studies have been carried out [23], [24], [25].
HAPS has great potential in various applications of the next generation; however, promoting HAPS as IMT base stations (HIBS) in the mobile service is the first thing to be done. Current technologies could enable HIBS to provide low-latency and broadband mobile connectivity in rural, remote and underserved areas, over a large geographic footprint, as well as using the same frequencies to support and complement the ground IMT network in urban areas. Therefore, WRC-19 adopted Resolution 247 and resolved to add Agenda Item 1.4 to WRC-23: to study the use of high-altitude platform stations as IMT base stations (HIBS) in the mobile service in certain frequency bands below 2.7 GHz already identified for IMT [26].
Determining the spectrum needs is a precondition for commercial deployment. The spectrum needs depend on a number of factors, such as specific system characteristics and deployment scenarios. Based on WRC-23 Agenda Item 1.4, and taking into account sharing and compatibility with existing services, this paper outlines technical and operational characteristics, as well as corresponding usage and deployment scenarios, for HIBS in the mobile service in certain bands below 2.7 GHz already identified for IMT, and then proposes an evaluation method to predict the spectrum needs of HIBS in a specific deployment scenario in the year 2025. The paper is organized as follows: Section II describes the system model of HIBS, including the system architecture, network topology and antenna model, and gives the simulation parameters; Section III presents an algorithm to simulate the HIBS spectral efficiency based on Monte Carlo system simulation; Section IV gives a method to predict the capacity demand of HIBS in 2025, as well as the calculated spectrum needs. A conclusion is given in Section V.
II. SYSTEM MODEL AND CONFIGURATION
Since the application scenario determines the characteristics of the communication system used, the spectrum needs are affected by the specific usage and deployment scenario. The technical characteristics given in this section address the case in which HIBS is deployed to complement terrestrial IMT networks in unserved areas, extending coverage and providing connectivity for applications including emergency response, disaster relief, and sensor networks.
A. SYSTEM ARCHITECTURE
Figure 1 shows the system architecture of HIBS in the deployment scenario considered in this paper. The HAPS is equipped with 7 active antenna units (AAUs) to provide 7 service cells, and the service link and gateway link together make up the HIBS network.
The service link carries communication between the HIBS and user equipment (UE) on frequencies already identified for IMT. Since the UE has low transmit power and a low-gain omni-directional antenna, the HIBS requires a high-gain antenna to transmit and receive signals appropriately. Thus, a MIMO antenna is used to provide connectivity over a wide area. Moreover, beamforming and mechanical tilt are adopted to compensate for the instability of the airborne platform and to ensure stable connectivity.
The gateway link provides a backhaul connection between the HIBS and the core network via a dedicated ground station.
B. NETWORK TOPOLOGY
Since the HAPS carries 7 AAUs in the deployment scenario of this paper, a HIBS area can be divided into 7 cells with a multibeam configuration, as shown in Figure 2. The cell at the center of the HIBS area is the 1st-layer cell, and the outer cells, arranged at equal horizontal angle intervals of 60 degrees, are defined as 2nd-layer cells. This cell topology results from the arrangement of the AAUs on the HAPS; the related parameters are given in Section II.D.
The network topology for a cluster of HIBS deployed over a wider area is shown in Figure 3. There are 7 HIBS areas, each covered by a single HIBS. The HIBS area radius is defined as A, while the distance between HIBS, referred to as the inter-HIBS distance, is defined as B and can be calculated from A as B = √3·A. The system simulation in this paper uses the topology of Figure 3, sketched in code below.
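As an illustration of this layout (not part of the original paper), the following Python sketch generates the 7 HIBS centre coordinates from the area radius A; the function name and the absolute orientation of the cluster are assumptions, since only the spacing B = √3·A is specified.

```python
import numpy as np

def hibs_centres(radius_a_km):
    """7-site cluster of Figure 3: a centre HIBS plus 6 neighbours placed at
    60-degree steps at the inter-HIBS distance B = sqrt(3) * A."""
    b = np.sqrt(3.0) * radius_a_km          # inter-HIBS distance
    angles = np.radians(np.arange(0, 360, 60))
    return [(0.0, 0.0)] + [(b * np.cos(a), b * np.sin(a)) for a in angles]
```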
C. ANTENNA MODEL
The antenna (AAU) mounted on the HAPS is capable of beamforming, which must be considered in the simulation because the multi-beam scheme affects the calculation of signal and interference. As shown in Figure 4, the beamforming antenna is based on an antenna array consisting of a number of identical radiating elements located in the same plane with a fixed separation distance. The radiation patterns of all elements point along the x-axis, and a system using an AAU actively controls the individual signals fed to the individual antenna elements in order to shape and steer the antenna pattern as desired.
The radiating elements of the antenna model illustrated in Figure 4 are placed uniformly on the z-O-y vertical plane of the Cartesian coordinate system, and the x-O-y plane denotes the horizontal plane. The elevation angle of the signal direction is denoted as θ, defined between 0° and 180°, with 90° representing the direction perpendicular to the array antenna aperture. The azimuth angle is denoted as ϕ, defined between −180° and 180°. The antenna array model is determined by the element pattern, the array factor, and the signals applied to the array system. The single-element pattern is calculated by (1)-(3) [27]: the horizontal and vertical radiation patterns of an element are given by (1) and (2), respectively, where φ_3dB and θ_3dB are the horizontal and vertical 3 dB beamwidths, and A_m and SLA_v are the front-to-back ratio and side-lobe attenuation; the combined element pattern is given by (3). The composite antenna beamforming pattern (4) is the sum of the element gain and the array gain: the second term in (4) is the logarithmic array gain, where w_i,n,m is a weighting function used to direct the beam in various directions and v_n,m is the superposition vector. These two factors depend on θ, ϕ, and the element spacing; the specific calculation is given in [27]. N_H and N_V are the numbers of elements along the horizontal and vertical directions, respectively. The composite pattern should be used where the array serves one or more UEs with one or more beams, with each beam indicated by the index i.
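To make the element-plus-array-factor computation of (1)-(4) concrete, the following hedged sketch evaluates a pattern of this type. The numeric defaults (65° beamwidths, 30 dB front-to-back ratio and side-lobe floor, 5 dBi element gain, an 8×8 array with half-wavelength spacing) and the function names are illustrative assumptions, since Table 1 is not reproduced here; the steering weights are simply the conjugate of the superposition vector in the wanted beam direction.

```python
import numpy as np

def element_pattern_db(theta_deg, phi_deg, theta_3db=65.0, phi_3db=65.0,
                       sla_v=30.0, a_m=30.0, g_e_max=5.0):
    """Single-element pattern in the style of (1)-(3)."""
    a_h = -min(12.0 * (phi_deg / phi_3db) ** 2, a_m)                  # horizontal cut
    a_v = -min(12.0 * ((theta_deg - 90.0) / theta_3db) ** 2, sla_v)   # vertical cut
    return g_e_max - min(-(a_h + a_v), a_m)                           # combined element gain

def composite_gain_db(theta_deg, phi_deg, beam_theta_deg, beam_phi_deg,
                      n_v=8, n_h=8, d_v=0.5, d_h=0.5):
    """Composite pattern in the style of (4): element gain plus array gain
    10*log10(|sum w*v|^2) for a beam steered towards (beam_theta, beam_phi)."""
    def steering(theta, phi):
        t, p = np.radians(theta), np.radians(phi)
        n = np.arange(n_v)[:, None]   # vertical element index
        m = np.arange(n_h)[None, :]   # horizontal element index
        return np.exp(1j * 2 * np.pi * (n * d_v * np.cos(t) +
                                        m * d_h * np.sin(t) * np.sin(p)))
    v = steering(theta_deg, phi_deg)                                  # superposition vector
    w = steering(beam_theta_deg, beam_phi_deg).conj() / np.sqrt(n_v * n_h)  # beam weights
    array_gain = 10 * np.log10(np.abs(np.sum(w * v)) ** 2)
    return element_pattern_db(theta_deg, phi_deg) + array_gain
```

In the steered direction the array gain reaches 10·log10(N_H·N_V), i.e. about 18 dB for the assumed 8×8 array.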
D. CONFIGURATION PARAMETERS
This paper carries out system simulations in 3 frequency bands, which currently carry the largest service volume of Chinese operators. Table 1 (Note 2) lists the deployment-related configuration parameters of the HIBS and of the UE served by the HIBS.
III. SIMULATION OF SPECTRAL EFFICIENCY
In this section, we use a Monte Carlo method to perform system-level simulation and calculate the spectral efficiency. We randomly distribute users in the network topology shown in Figure 3 and calculate the spectral efficiency at each user position; this process is called a "snapshot". After taking multiple snapshots, we obtain the spectral efficiency at a large number of points covering almost all positions in the topology, and the average value can be regarded as the spectral efficiency of the system.
A. USER ACCESS METHOD
First, it is necessary to determine which cell each UE accesses. UEs are distributed randomly in the HIBS area, and we assume a user density of 3 UEs/cell (Table 1). Figure 2 shows an example of the user distribution in one snapshot, and Figure 3 shows the network topology used in the simulation (7 HIBS areas, 7 cells per HIBS). The flowchart of the access process for a single HIBS is illustrated in Figure 5.
The access process in the simulation is implemented as follows. Step 1: a UE is randomly deployed in a circle centred on HIBS_i (i ∈ [1,7]) with the HIBS area radius as its radius.
Step 2: for each UE, obtain the transmit power P_tx, the antenna gain G_tx, and the total loss (including propagation loss, ohmic loss, body loss, penetration loss, etc.) from each HIBS cell, and calculate the signal S_j from cell_j (j ∈ [1, 49]) by (5) (Note 1). Step 3: select the HIBS cell_j that maximizes S_j.
Step 4: if cell_j belongs to HIBS_i, execute Step 5; otherwise, delete the UE, regenerate a UE, and repeat Steps 1 to 4.
Step 5: if the number of UEs in cell_j is less than 3, assign the UE to cell_j; otherwise, delete the UE, regenerate a UE, and repeat Steps 1 to 4.
Step 6: iterate over HIBS_i from i = 1 to 7.
The complexity of the UE access algorithm is O(n·j²), where n is the number of snapshots and j is the number of cells.
Note 1: S_j is calculated from the 4 terms in (5). G_UE is fixed, and P_tx,j takes one of two values depending on whether it is the power of a 1st-layer or a 2nd-layer cell; the parameters of both terms can be found in Table 1. Loss_j is composed of the propagation loss ProL and other losses (ohmic loss, body loss, and penetration loss). The other losses are fixed in the simulation, and the path loss adopts the propagation model of (6) [28], where r is the distance between the base station on the HAPS and the UE:
ProL = 20 log10(f) + 20 log10(r) + 32.44 (dB)    (6)
As for G_tx,j, the antenna model of Section II.C is used, with the maximum-gain direction of each antenna pointing at the centre of its cell. G_tx,j depends on the angles θ and ϕ of the antenna, which are determined by the coordinates of the centre of cell_j, the UE, and HIBS_i.
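A minimal sketch of the per-cell signal calculation of Steps 2-3 follows, assuming the free-space model of (6) with f in MHz and r in km (the usual convention for the 32.44 dB constant); the fixed value used for the other losses and the function names are placeholders.

```python
import numpy as np

def propagation_loss_db(f_mhz, r_km):
    """Eq. (6): free-space loss with f in MHz and r in km."""
    return 20 * np.log10(f_mhz) + 20 * np.log10(r_km) + 32.44

def received_signal_dbm(p_tx_dbm, g_tx_db, g_ue_db, f_mhz, r_km, other_loss_db=8.0):
    """Eq. (5)-style signal from one cell at the UE position; other_loss_db lumps
    ohmic, body and penetration losses (placeholder value)."""
    return p_tx_dbm + g_tx_db + g_ue_db - propagation_loss_db(f_mhz, r_km) - other_loss_db

def serving_cell(signals_dbm):
    """Step 3: the UE attaches to the cell with the strongest received signal."""
    return int(np.argmax(signals_dbm))
```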
B. DOWNLINK SPECTRAL EFFICIENCY
In order to calculate the spectral efficiency of a randomly placed UE, its SINR must be calculated first. For the downlink, a UE in the simulated topology receives signals from the antennas of 49 cells (7×7: 7 HIBS areas and 7 cells per HIBS), denoted as S_j (j ∈ [1,49]). Assume that for j = k the signal takes its maximum value S_max = S_k, which means that the UE accesses cell_k and S_k is the wanted signal. The signals S_j with j ≠ k are interference and are denoted I_j. Since I_j represents the interference that the UE receives from cell_j, the aggregate interference I_sum is calculated by (7), and the SINR of the UE, in dB, is calculated by (8).
N is the noise; for the downlink it depends on the thermal noise of the UE and takes a typical value of −105 dBm in the simulation. From the SINR, the spectral efficiency f_se of the UE at that coordinate point under the HIBS deployment of this paper can be calculated by (9), which is derived from the attenuated Shannon bound.
α is an attenuation factor representing implementation losses, and SINR_min and SINR_max are the SINR boundaries of the code set. The parameters α, SINR_min, and SINR_max can be chosen to represent different modem implementations and link conditions; the values used in this paper are given in Table 2 (Note 2). Note 2: the parameters in Table 1 and Table 2 are those determined in the ITU-R AI 1.4 PDNR (Preliminary Draft New Report) after discussion by the delegates of each country. The other contents of the PDNR still need to be updated over several meetings before the official ITU report is released to the public; therefore, there is no specific public reference for these parameters.
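The following sketch reproduces the downlink SINR and attenuated-Shannon-bound calculation of (7)-(9). The noise value follows the text, while α, SINR_min, and SINR_max are placeholder values, since Table 2 is not reproduced here.

```python
import numpy as np

def downlink_sinr_db(s_dbm, interferers_dbm, noise_dbm=-105.0):
    """Eqs. (7)-(8): interference and noise are summed in linear units (mW)."""
    i_plus_n = np.sum(10 ** (np.asarray(interferers_dbm) / 10)) + 10 ** (noise_dbm / 10)
    return s_dbm - 10 * np.log10(i_plus_n)

def spectral_efficiency(sinr_db, alpha=0.6, sinr_min_db=-10.0, sinr_max_db=30.0):
    """Eq. (9): attenuated Shannon bound, zero below SINR_min, saturated above SINR_max.
    The three parameter values here are placeholders for the Table 2 entries."""
    if sinr_db < sinr_min_db:
        return 0.0
    sinr_lin = 10 ** (min(sinr_db, sinr_max_db) / 10)
    return alpha * np.log2(1 + sinr_lin)
```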
Through (7), (8), and (9) we obtain the spectral efficiency at one point in the HIBS coverage; repeating this calculation yields the spectral efficiency of all points in a snapshot. By taking multiple snapshots, we obtain a uniform distribution of UEs over the HIBS coverage and a large dataset of spectral efficiency values at the corresponding coordinate points. The flowchart of the downlink simulation method is illustrated in Figure 6.
The complexity of the DL calculation is O(n·j²), where n is the number of snapshots and j is the number of cells. The specific steps are not repeated here, but several points deserve explanation. HIBS_1 in Figure 6 is at the center of the topology shown in Figure 3. Since HIBS_1 being interfered with by the surrounding HIBS is the typical networking situation, we only investigate the spectral efficiency distribution of HIBS_1. The dataset stores the SE, SINR, S, and I of the points belonging to HIBS_1 in all snapshots, and statistical analysis of this dataset yields the downlink spectral efficiency results.
C. UPLINK SPECTRAL EFFICIENCY
Since the uplink SE is constrained by interference from other UEs, power control must be considered; it can be combined with frequency-domain resource allocation strategies to enhance cell-edge performance as well as improve SE [27]. The power control algorithm for a single UE is given by (10), (11), and (12), where P_a is the UE power required to fill the maximum number of resource blocks (RB) M_max, P_0pusch is the power per RB, and α is a balancing factor between UEs with bad and good channels; PathL is the path loss including propagation loss, body loss, ohmic loss, and the gains of the HIBS and UE, with G_HIBS determined by the antenna model of Section II; P_UE is the UE's transmit power, and P_cmax is the maximum power of the UE.
The number of RBs M_UE allocated by power control is given by (13). The related power-control parameters in the simulation are: M_max is 105 RB, P_0pusch is −92.2 dBm, and α is 0.8; the others can be found in Table 1. However, the UEs in the same cell must be scheduled cooperatively, so M_UE is modified according to the other 2 UEs in the same cell. The number of RBs actually used by the UE is given by (14), where M_UEp and M_UEq are the allocated RBs of the other 2 UEs in the same cell, calculated by (13). This completes the resource allocation and power calculation of a single UE under the power control algorithm. It is worth noting that different UEs have different power spectral densities, so for the uplink it is better to calculate S, I, and SINR using power per MHz. Thus, the uplink signal power density of the jth user served in the ith cell (UE_i,j), denoted P_i,j, is given by (15). Note 3: some explanations of (16)-(19) follow. Equation (16) is similar to (5) for the downlink, with the transmit power replaced by the power density of UE_i,j; G^i_i,j is the antenna gain of cell_i, calculated by the antenna model; Loss^i_i,j is the loss from the coordinate point of UE_i,j to cell_i, which includes ohmic loss, body loss, penetration loss, and the propagation loss calculated by (6). Equation (17) describes the interference received at the antenna of cell_i from UEs served in cells other than that of UE_i,j; the second term in (17) is a weighting factor used to obtain the average interference of the different UEs in the same neighbouring cell over the total bandwidth. Equation (18) is the aggregate interference, i.e., the sum of the signals from the 144 UEs other than the 3 UEs in cell_i. Equation (19) describes the uplink SINR of UE_i,j, where N is the noise; it depends on the thermal noise of the HIBS receiver and takes a typical value of −109 dBm.
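A hedged sketch of the uplink open-loop power control and RB allocation described by (10)-(14), using the stated values M_max = 105 RB, P_0pusch = −92.2 dBm, and α = 0.8; the UE maximum power and the proportional co-scheduling rule are assumptions, not values taken from the paper.

```python
import numpy as np

M_MAX = 105            # maximum number of schedulable resource blocks (from the text)
P0_PUSCH_DBM = -92.2   # target power per RB (from the text)
ALPHA = 0.8            # fractional path-loss compensation factor (from the text)
P_CMAX_DBM = 23.0      # UE maximum transmit power (assumed typical handset value)

def allocated_rbs(path_loss_db):
    """Eqs. (10)-(13) in spirit: the per-RB power follows P0 + alpha*PathL, and the UE
    fills as many RBs as fit under its maximum power P_cmax."""
    per_rb_dbm = P0_PUSCH_DBM + ALPHA * path_loss_db
    headroom_db = P_CMAX_DBM - per_rb_dbm
    if headroom_db <= 0:
        return 1                                   # power limited: keep at least one RB
    return int(min(M_MAX, max(1, np.floor(10 ** (headroom_db / 10)))))

def co_scheduled_rbs(m_ue, m_peer1, m_peer2):
    """Eq. (14) in spirit: the 3 UEs of a cell share at most M_MAX RBs; proportional
    down-scaling is an assumption about the exact co-scheduling rule."""
    total = m_ue + m_peer1 + m_peer2
    return m_ue if total <= M_MAX else int(np.floor(m_ue * M_MAX / total))
```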
Substituting the uplink SINR into (9), we obtain the uplink SE of UE_i,j at that point. Calculating all UEs in a snapshot and taking hundreds of snapshots yields an SE dataset uniformly distributed over the topology. The flowchart of the uplink simulation method is similar to Figure 6; compared with the downlink, the only difference is that within the snapshot cycle each UE adds a power-control step after completing the access process, so it is not repeated here. The complexity of the UL calculation is O(n·j²), the same as for the DL.
D. SIMULATION RESULTS
We apply the above method to simulate the 3 frequency bands (Table 1) below 2.7 GHz that are already identified for IMT. By analyzing the resulting datasets, some simulation results are presented in this section.
The access-process simulation result with snapshots = 100 for Band 1 and Band 3 is shown in Fig. 7, which reveals the coverage of each cell. The spots (2100 in total) of different colors represent UEs served by different cells. Because of the 7-cell topology, the platform antenna tilt, and the beamforming antenna pattern, the coverage of the center cell shrinks. During the simulation we also found that, under this HIBS configuration, the lower the frequency band, the smaller the coverage of the central cell: comparing Band 1 and Band 3, the red coverage of the lower-frequency Band 1 is smaller. Although the coverage of the central area shrinks, the central area has very good signal quality. We then ran the simulation with snapshots = 1000; taking the downlink SINR map for Band 1 as an example (Fig. 8), the SINR of the central cell is mostly higher than 10 dB, while the hot spots of the surrounding 6 cells extend radially outward from the central cell, so that the hot spots over the whole HIBS area are shaped like a flower.
We then perform statistical analysis on the dataset of simulation results (snapshots = 1000). CDF curves of the SINR and SE for the downlink and uplink are shown in Fig. 9 and Fig. 10, respectively.
According to Fig. 9 and Fig. 10, the lower-frequency band achieves better signal quality and spectral efficiency in the simulation under the same configuration. In practice, however, the antenna for a lower-frequency band is larger, which requires a higher payload capacity of the aircraft, and the poorer beamforming capability of low-frequency antennas needs to be overcome.
We use the 5th percentile of the CDF curve to represent the communication capability at the coverage edge of the HIBS system. As shown in Fig. 10(b), the 5th percentile of the uplink SE CDF for Band 3 is 0 bps/Hz, which indicates that the UE is uplink-limited at the edge of the HIBS area and cannot access the base station on the HAPS. Therefore, in network planning, the coverage radius of the HIBS area in Band 3 should be reduced appropriately, or the capability of the airborne base station should be improved, to solve this problem.
We use the average value of the dataset (snapshots = 1000, UE = 21000) to represent the communication capability of the HIBS system, and calculate the average SINR and SE for the subsequent calculation of spectrum needs. The uplink and downlink results for the three frequency bands are shown in Table 3.
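The two statistics used here (edge and average capability) can be extracted from the SE dataset as in this small helper; the function name is hypothetical.

```python
import numpy as np

def summarize_se(se_samples):
    """Edge capability (5th percentile of the CDF) and system average used in Table 3."""
    se = np.asarray(se_samples, dtype=float)
    return {"edge_5th_percentile": float(np.percentile(se, 5)),
            "average": float(np.mean(se))}
```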
IV. CAPACITY DEMAND AND SPECTRUM NEEDS
When deployed as a complement to an existing ground-based IMT network, the capacity demands of HIBS are similar to those of terrestrial IMT, which were already identified in Report ITU-R M.2290 [29] on future capacity demand estimates for terrestrial IMT. In the case considered in this paper, HIBS is deployed in remote areas where ground-based IMT base stations have yet to be deployed, so HIBS would play an essential role in bridging the digital divide across rural and remote areas. In this section, we predict the spectrum needs in 2025 for this case of HIBS application.
A. ESTIMATE METHODOLOGY
We follow the method of Report ITU-R M.2290 to estimate the capacity demand of HIBS and combine it with the simulated spectral efficiency to predict the spectrum needs in 2025. The methodology for the spectrum needs is as follows. Step 1: refer to Recommendation ITU-R M.1768 [30] and define the service environments (SEs) and service categories (SCs).
Step 2: according to the characteristics of the HIBS applications, select the appropriate SCs and SEs in the market data. The spectrum requirement estimation tool for IMT [31] provides the market data for 2010, 2015, and 2020.
Step 3: calculate the traffic demand for the selected SCs and SEs, i.e., the traffic demand for the HIBS application. The calculation method follows Section 3.5.2.6 of [30].
Step 4: the spectrum needs are calculated as CapacityDemand/SpectralEfficiency. Step 5: fit the existing market data with different extrapolation functions, such as polynomial or exponential functions, and calculate the range of traffic demand in 2025 from the fitted curve.
The SCs are defined as combinations of service type and traffic class, while the SEs are defined as combinations of teledensity and service usage pattern; they are shown in Table 4 and Table 5, respectively.
Since HIBS in this paper's case is deployed in remote areas, which are vast territories with sparse population, as a complement to the ground-based IMT network, both the user density and the required data rate are very low. Therefore, it is suitable to choose SC15 and SC20 of SE6 as the traffic estimates for the HIBS deployment scenario; that is, (SC15, SE6) and (SC20, SE6) are the appropriate selections in Step 2 of the methodology. The capacity demand for HIBS is the sum of the traffic of (SC15, SE6) and (SC20, SE6).
B. INPUT PARAMETER CONFIG
The market-related input parameters include the user density, session arrival rate per user, average session duration, and mean service bit rate, which are used in the capacity demand calculation of Step 3.
The specific values of the market parameters are selected from the ranges (minimum and maximum) given in [31] through percentage values (0-100): a percentage of 0 means the minimum value of the range and 100 means the maximum. Table 6 gives the corresponding percentage settings of the market attributes of the HIBS application for remote areas of China in 2010, 2015, and 2020. The capacity calculation for packet-switched SCs is the product of the user density (U), the session arrival rate per user (Q), the mean service bit rate (R), and the average session duration (µ). If all four parameters were varied at the same time, the resulting traffic calculation would become unnecessarily complicated; therefore, the user density is the only market parameter that differs between the settings.
The numerical values of the market attributes used in the capacity demand calculation are obtained as in (20), where x is the percentage from Table 6; Q, R, and µ are obtained in the same way. The minimum and maximum of each market attribute can be looked up in [31]. The capacity estimates for the three years are calculated from Table 6 and (20) and are listed in Table 7; the results in Table 7 represent the estimated downlink capacity demand per km² for the HIBS.
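The following sketch shows the numerical core of Steps 3 and 5: the percentage mapping of (20), the U·Q·R·µ capacity product (with consistent units assumed), and a polynomial extrapolation to 2025. A second-order fit through the three historical points is one possible choice among the extrapolation functions mentioned above; the function names are assumptions.

```python
import numpy as np

def market_value(x_percent, v_min, v_max):
    """Eq. (20): a percentage of 0 selects the minimum of the range in [31], 100 the maximum."""
    return v_min + x_percent / 100.0 * (v_max - v_min)

def capacity_demand(u_per_km2, q_sessions_per_s, r_bit_per_s, mu_seconds):
    """Packet-switched capacity demand per km^2 as the product U x Q x R x mu."""
    return u_per_km2 * q_sessions_per_s * r_bit_per_s * mu_seconds

def extrapolate_to_2025(demand_2010, demand_2015, demand_2020, degree=2):
    """Step 5: fit the three historical estimates (polynomial fit assumed here) and
    evaluate the curve at 2025."""
    years = np.array([2010.0, 2015.0, 2020.0])
    coeffs = np.polyfit(years, [demand_2010, demand_2015, demand_2020], degree)
    return float(np.polyval(coeffs, 2025.0))
```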
C. CALCULATION RESULT
Since the 7 cells of a HIBS reuse the same frequencies, we only need to calculate the spectrum needs from the capacity demand of one cell, which then equals the spectrum needs of the entire HIBS area. With a known HIBS area radius, the area of one cell can be estimated as one of seven hexagons of equal area, as given in (21). The downlink capacity demand estimates of Band 1, Band 2, and Band 3 for HIBS in 2010, 2015, and 2020 are calculated from the results of Table 7 and (21) and are shown in Table 8. It is worth noting that the 2025 data in Table 8 are obtained by polynomial fitting as a prediction of the capacity demand in 2025.
As for the uplink traffic prediction of HIBS in 2025, based on actual network usage in China, it is estimated as 1/8 of the downlink traffic; the uplink capacity demand is shown in Table 9. On the basis of the simulated spectral efficiency in Table 3 and the capacity demand predictions above, the spectrum needs in 2025 were calculated as CapacityDemand/SpectralEfficiency and are given in Table 10.
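Putting the pieces together, a hedged sketch of the final spectrum-needs step: the cell area in the spirit of (21), here assuming a circular HIBS footprint of radius A split into seven equal cells (an assumption about the footprint shape), and the CapacityDemand/SpectralEfficiency division.

```python
import numpy as np

def cell_area_km2(hibs_radius_km):
    """Eq. (21) in spirit: one of seven equal-area cells of the HIBS footprint,
    with the footprint approximated as a circle of radius A."""
    return np.pi * hibs_radius_km ** 2 / 7.0

def spectrum_needs_mhz(capacity_mbps_per_km2, hibs_radius_km, se_bit_per_s_per_hz):
    """Spectrum needs = capacity demand of one cell / spectral efficiency
    (Mbit/s divided by bit/s/Hz gives MHz)."""
    cell_capacity_mbps = capacity_mbps_per_km2 * cell_area_km2(hibs_radius_km)
    return cell_capacity_mbps / se_bit_per_s_per_hz
```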
Based on the above prediction process, the total spectrum needs of HIBS in 2025 range from 80.30 MHz to 122.97 MHz. The spectrum needs change depending on the frequency band and the spectral efficiency. It should be noted that HIBS is deployed independently in our prediction scenario, so interference from other systems is not taken into account; such interference could make the actual HIBS spectrum needs higher than predicted here.
V. CONCLUSION
Through system simulation and traffic prediction, this paper shows that the spectrum needs of HIBS as a supplement to the IMT network in remote areas in 2025 range from 80.30 MHz to 122.97 MHz. This paper provides only an evaluation method for spectrum needs, and a stand-alone balloon providing Internet access to a remote area is a limited example of what is possible. In the future, more HAPS application scenarios may actually be deployed, requiring more spectrum and higher frequency bands, which can be evaluated using the method in this paper.
| 7,390.6 | 2022-01-01T00:00:00.000 | [ "Computer Science" ] |
Concepts and Components for Pulsed Angle Modulated Ultra Wideband Communication and Radar Systems
Introduction
Ultra Wideband (UWB) systems have been utilized and commercialized since the beginning of the 1970s and have been successfully used in ground-, wall-, and foliage-penetration, collision warning and avoidance, fluid level detection, intruder detection, and vehicle radar, and also for the topics of the intended research project, communication and position-location [1]. Up to now, the latter two fields have been treated separately in most developments.
UWB has the potential to yield solutions for the challenging problem of time dispersion caused by multipath propagation in indoor channels. For a local positioning system, multipath propagation determines the physical limit of the maximal accuracy that can be obtained at a given signal bandwidth [14].
There exist several techniques for generating ultra-wideband signals. Traditionally, UWB was defined as pulse-based radio, and especially for radar and localization applications the use of very narrow pulses is still the dominant technique. In addition, there are UWB systems that use more complex modulation techniques, such as multiband orthogonal frequency-division multiplexing (MB-OFDM) or direct-sequence code-division multiple access (DS-CDMA), to spread the transmitted information over a large bandwidth. These are applied in communication systems, whereas radar systems using such techniques can hardly be found.
Recently, there has been increasing interest in UWB technologies applied in mm-wave frequency bands. This interest is stimulated by the novel regulation for future vehicular UWB systems in the 79 GHz band (77-81 GHz) [12], the novel international allocation of unlicensed bands from 57 to 66 GHz [9], and the attractive ISM bands at 122.5 GHz with 1 GHz bandwidth and at 244 GHz with 2 GHz bandwidth. The 61.5 GHz ISM band with 500 MHz of available bandwidth is also often considered a "de-facto" UWB band, even though its bandwidth is just below the 500 MHz usually demanded as the minimum bandwidth for UWB. The great advantage of the mm-wave UWB bands is that they do not suffer from the severe power regulations known from standard UWB: in the above-mentioned mm-wave UWB bands, the permitted maximum mean power density is at least 38 dB higher than in the UWB bands below 30 GHz.
Most of the mm-wave UWB communication and ranging systems published so far use a simple pulse generator as the signal source. In the simplest case, a mm-wave CW carrier is modulated with an ASK (see, e.g., [17]) or BPSK (see, e.g., [18]) sequence. A very interesting low-power approach that is somewhat related to the approach in this work is shown in [6] and [7]. Here, a 60 GHz oscillator itself is switched on and off. To guarantee a stable startup phase and to improve the phase noise, the oscillator is injection locked to a spurious harmonic of the switching signal. The benefit of the pulsed injection-locking approach with respect to power consumption was impressively shown in this work. The general approach of obtaining a stable pulse-to-pulse phase condition by injecting a spurious harmonic of the switching pulse into the oscillator has long been known from low-power and low-cost microwave primary pulse radar systems. This basic principle can be extended so that frequency modulated signals can be generated based on a switched injection-locked oscillator [19]. In this work, it is generalized for synthesizing arbitrarily phase modulated signals for integrated local positioning and communication. The fusion of positioning and communication capability is especially needed for future wireless devices applied in the "internet of things", for advanced multimedia / augmented reality applications, for robot control, and for vehicle2X / car2X applications.
Most existing UWB communication and ranging systems, especially those dedicated to low power consumption and mm-wave frequencies, employ simple impulse radios (IR). Popular IR-UWB modulation techniques include on-off keying (OOK), pulse-position modulation (PPM), pulse-amplitude modulation (PAM), and binary phase shift keying (BPSK) [5,17,18]. Their waveforms can be synthesized using low-complexity impulse generators and control circuitry, which comes at the cost of low spectral efficiency and severely limited control over the spectral properties of the synthesized signals. Consequently, these transmitters cannot exhaust the regulatory boundaries in all operation modes: high data rate synthesizers are often average-power limited, whereas low data rate implementations may be peak-power limited [20].
Proposed concepts and components
In order to overcome these issues, pulsed angle modulated UWB signals are proposed to provide greater flexibility and better control over the spectral properties of the synthesized signals. Additionally, this signal type is well suited for both ranging and communication, since it allows synthesizing pulsed frequency modulated chirps that are attractive for ranging as well as digital phase modulation schemes for data transmission with the same hardware.
Since classic architectures containing VCOs, PLLs, mixers, linear amplifiers, and switches are not suited for low-complexity, low-power systems, the switched injection-locked oscillator is suggested for signal synthesis. It regenerates and amplifies a weak phase-modulated signal. Consequently, the high-frequency RF signal can be generated from a high-power but efficiently synthesized low-frequency phase modulated baseband signal in two simple stages: a lossy passive or low-power frequency multiplier (harmonic generator) and a switched injection-locked oscillator as a single-stage pulsed high-gain (> 50 dB) amplifier.
In this work, it is demonstrated that this approach allows synthesizing pulsed, arbitrarily phase modulated signals using the switched injection-locked harmonic sampling principle. The theory of this concept was investigated thoroughly and verified experimentally for the synthesis of phase shift keying (PSK) modulated communication signals and pulsed frequency modulated (PFM) radar signals with the same hardware. Regarding the switched injection-locked oscillator, implementations in planar surface-mounted technology (6 and 7.5 GHz) and in integrated circuits (6-8 GHz, 63 GHz) were developed. Measurements with the first designs confirm the feasibility of the proposed concepts and already show promising results regarding the transmitter signal-to-spur ratio and the achievable ranging resolution and ranging uncertainty.
This work shows the half-term results of the ongoing project "Components and concepts for low-power mm-wave pulsed angle modulated ultra wideband communication and ranging (PAMUCOR)" within the DFG priority programme "Ultra-Wideband Radio Technologies for Communications, Localization and Sensor Applications"; for comparison, some results from the previous project "Concepts and components for pulsed frequency modulated ultra wideband secondary radar systems (PFM-USR)" are summarized.
Signal definition
Fig. 1 depicts a pulsed angle modulated UWB signal consisting of a sequence of short pulses (width T_d, period T_s), in which each pulse is an oscillation with the frequency ω_osc and the modulated initial phase ϕ_i, as given in (1).
For flexible signal synthesis, initial phase modulation, pulse period, pulse width and oscillation frequency can be tuned.
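As an illustration of this signal class, the following Python sketch generates a sampled version of the Fig. 1 waveform; the rectangular pulse envelope, the default numeric values, and the function name are assumptions, not taken from the paper.

```python
import numpy as np

def pulsed_pm_signal(phases, f_osc=7.5e9, t_d=1e-9, t_s=100e-9, f_sample=40e9):
    """Sampled waveform in the spirit of (1): one short pulse per symbol, each an
    oscillation at f_osc starting with the modulated initial phase phi_i."""
    n_period = int(round(t_s * f_sample))   # samples per pulse period
    n_on = int(round(t_d * f_sample))       # samples per pulse
    t_on = np.arange(n_on) / f_sample
    signal = np.zeros(len(phases) * n_period)
    for i, phi in enumerate(phases):
        start = i * n_period
        signal[start:start + n_on] = np.cos(2 * np.pi * f_osc * t_on + phi)
    return signal
```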
SILO operation principle
The switched injection-locked oscillator (SILO) is basically a normal oscillator which is turned on and off while a weak reference signal is injected into its feedback loop (see Fig. 2). During startup of the oscillator, the injection signal provides an initial condition in the oscillator's resonator instead of noise like in oscillators without injection signal. This way, the instantaneous phase of the injection signal is adopted though the oscillator runs with its own natural frequency, which may differ from the injection signal's frequency. Since the power level of the injection signal is far too low to influence the oscillation as soon as the oscillator has reached its final amplitude, it performs only phase, but no frequency locking.
Figure 2. SILO principle
This behavior can be described theoretically by (2), with the injection signal given by (3) (center/reference frequency ω_inj, phase modulation ϕ(t)). Although this model only describes the fundamental principle, the physical behavior of the oscillator is very similar in most operation modes. The most important physical effects that are disregarded by the model but observed in real implementations are:
• Due to balancing imperfections, e.g. in differential oscillators, high-order harmonics of the startup pulse that turns the circuit on cause self-locking effects that degrade the SILO's performance at low injection levels. Hence, the rise time of the oscillator should not be too short, in order to reduce the harmonic power level. Obviously, this leads to a trade-off with spectral bandwidth, minimum pulse width, and maximum achievable pulse repetition rate.
• The phase sampling process is affected by the amplitude of the injection signal. In consequence, amplitude variations of the injection signal are converted into phase distortions. Therefore, constant amplitude injection signals should be used to mitigate these effects. Then there is only a constant phase offset between injection and regenerated signal.
• If the rise time of the oscillator is configured to be relatively long compared to the pulse width, there will be a noticeable dependence between the injection signal's power level and pulse width. With a large amplitude injection signal, the oscillator settles much faster than when starting from noise level. Again, constant amplitude injection signals are the preferred countermeasure to avoid pulse width jitter.
Thus, the simplifications of the proposed ideal model mainly affect time and frequency domain amplitude shape, which makes this model suitable for the analysis of the phase sampling process.
Phase sampling theory
In [3,4,19], the SILO's phase sampling principle and its applications have been investigated thoroughly. The most important results will be summarized and discussed in the following.
Starting from equations (2) and (3), the SILO's output signal can be expressed by (4) (disregarding negative frequencies and the finite time-domain waveform length for the sake of simplicity). This expression still suggests an oscillation at ω_osc; the regeneration of the injection signal, including its frequency, is not obvious from it. According to [4], the Fourier transform F{·} of (4) leads to (5). The SILO output spectrum according to (5) consists of the user-defined phase modulation spectrum, together with its center/carrier frequency component, convolved with the aliasing signal of the sampling process (a Dirac comb), see Fig. 3, and weighted by a sinc envelope centered at the oscillator's natural frequency ω_osc. Since this frequency only affects the envelope and a constant phase offset, the SILO can be regarded as a highly effective aliased regenerative amplifier. In consequence, an injected user-defined constant-envelope phase modulated signal is reproduced correctly even with a free-running oscillator of (within certain bounds) unknown natural frequency, as long as Nyquist's sampling theorem is fulfilled (modulation bandwidth less than half the pulse repetition frequency).
Figure 3. SILO output spectrum according to (5)
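The structure of (5) can be illustrated for an unmodulated injection tone with the short sketch below: the spectral lines sit on a comb tied to the injection frequency, while the oscillator's natural frequency only shapes the sinc envelope. Rectangular pulses are assumed and the function name is hypothetical.

```python
import numpy as np

def silo_comb(f_inj, f_osc, t_d=1e-9, t_s=100e-9, n_aliases=5):
    """Lines at f_inj + k/T_s (they follow the injection frequency), weighted by a
    sinc envelope of width ~1/T_d centred on the free-running frequency f_osc."""
    k = np.arange(-n_aliases, n_aliases + 1)
    line_freqs = f_inj + k / t_s
    envelope = np.abs(np.sinc((line_freqs - f_osc) * t_d))  # np.sinc(x) = sin(pi x)/(pi x)
    return line_freqs, envelope
```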
In general, this signal synthesis principle is not limited to phase modulated / constant-envelope signals. For amplitude modulation, an electronically tuned attenuator at the SILO's output can, for example, be employed to manipulate the amplitude of each pulse synchronously with the pulse rate, which leads to a polar modulator. Since efficient pulse amplitude modulation, in contrast to complex phase modulation, has been feasible for a long time and can be added independently, this work concentrates on the latter aspect.
Phase modulated UWB communication signals
For the synthesis of communication signals [4], any phase modulated constant envelope signal that is bandwidth limited to half pulse repetition frequency can be chosen. The maximum possible symbol rate leads to one symbol per pulse.
Demodulation can be achieved similarly to existing approaches that allow quadrature pulse demodulation (e.g. [11]). Basically, the phase of each pulse has to be sampled synchronously with the pulse sequence (i.e. during the pulse duration), which can be realized, for example, by quadrature baseband down-conversion and synchronized sample acquisition. In this case, the sequence of received samples is given by (6), where Δt_sync denotes a moderately unknown synchronization error (uncertainty less than half the pulse width) that has to be taken into account in practice. Inserting (4) into (6) leads to (7). Accordingly, the original phase modulation ϕ_inj is reconstructed correctly apart from a constant phase offset. The constancy of this offset is guaranteed as long as the natural frequency of the unstabilized oscillator does not drift too fast, which is usually the case because environmental parameters such as temperature change relatively slowly. For compensation, differential modulation schemes or short frames can be applied, for example.
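A minimal sketch of this per-pulse quadrature demodulation, assuming ideal synchronization (Δt_sync = 0) and approximating the per-pulse sample by the mean over the pulse; it recovers the transmitted phases up to the constant offset discussed above and can be applied directly to the output of the waveform sketch given earlier.

```python
import numpy as np

def pulse_phases(signal, f_lo, f_sample, t_s, t_d, n_symbols):
    """Quadrature down-conversion followed by one complex sample per pulse,
    in the spirit of (6)-(7)."""
    t = np.arange(len(signal)) / f_sample
    baseband = signal * np.exp(-2j * np.pi * f_lo * t)   # quadrature down-conversion
    phases = []
    for i in range(n_symbols):
        start = int(round(i * t_s * f_sample))
        stop = start + int(round(t_d * f_sample))
        phases.append(np.angle(np.mean(baseband[start:stop])))
    return np.array(phases)
```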
Frequency modulated UWB radar signals
Since the SILO-based synthesizer is capable of generating any constant-envelope phase modulated signal (within the bandwidth limit), even a frequency modulated radar signal with bandwidth B, sweep duration T, and the phase given in (8) can be transmitted. At the receiver, the time-delayed transmit signal s(t) is mixed with an FMCW reference signal. According to [3], the resulting approximate beat frequency spectrum (disregarding the envelope) is equivalent to the conventional FMCW spectrum except for the aliases resulting from the switched operation and a constant phase offset A. The (one-way) distance can then be calculated from the beat frequency, given that transmitter and receiver are precisely synchronized, which can be achieved through two-way synchronization as in [16]. Strictly speaking, the sampling theorem is not met for a sweep bandwidth larger than the pulse repetition frequency. However, aliasing can be exploited to minimize the ramp synthesis effort (see Fig. 4). The injected and regenerated signal is configured to represent a short chirp within the sampling bandwidth that is repeated continuously. Considering aliasing, the resulting signal appears continuous at the receiver when sweeping through all aliases.
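For orientation, the beat-frequency-to-range relation for a synchronized one-way link can be sketched as follows; the exact constant of the paper's distance equation is not reproduced here, so this is only the standard inversion of f_beat = slope · delay.

```python
def one_way_distance(f_beat_hz, bandwidth_hz, sweep_time_s, c_p=3.0e8):
    """One-way range from the beat frequency of a synchronized FMCW pair:
    distance = c_p * f_beat / (B / T). A conventional two-way FMCW radar would
    additionally halve this result."""
    slope = bandwidth_hz / sweep_time_s
    return c_p * f_beat_hz / slope
```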
The required effort can even be further reduced: Since the SILO only samples certain phase values, it is not necessary to actually generate continuous sweeps as intermediate signal.
Instead, a CW injection signal with stepped phase modulation is sufficient, as long as its phase (modulo 2π) equals (8) at the sampling instants. According to [3], this approach results in a short periodic sequence of samples (period p ∈ N+), under the condition that a term derived in [3] is an integer and p is even; the minimum period of the sequence is also derived there. The only restriction that results from exploiting aliases is a limitation of the unambiguous range, i.e., the maximum distance (phase velocity c_p). Considering a sampling period of 100 ns (pulse repetition frequency 1/T_s = 10 MHz), which is convenient for low-power implementations, a sufficient maximum range of over 1 km can be achieved even at a high bandwidth of 2 GHz in 1 ms.
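The stepped-phase list itself can be generated as in this sketch, which evaluates the chirp phase of (8) modulo 2π at the switching instants (relative to the sweep start frequency); parameter and function names are assumptions. Because the list repeats with a short period, only a limited number of phase samples needs to be stored and replayed by the modulator.

```python
import numpy as np

def chirp_phase_samples(bandwidth_hz, sweep_time_s, pulse_period_s, n_samples=None):
    """Phase values (mod 2*pi) of an ideal linear chirp at the SILO switching instants."""
    if n_samples is None:
        n_samples = int(round(sweep_time_s / pulse_period_s))
    t = np.arange(n_samples) * pulse_period_s
    slope = bandwidth_hz / sweep_time_s            # chirp rate B/T
    return np.mod(np.pi * slope * t ** 2, 2 * np.pi)
```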
System concepts
In the following, concepts and implementations for the pulsed angle modulated signal synthesis principle are presented. Firstly, the harmonic sampling approach is presented, which is used to exploit the benefits of the switched injection-locked oscillator concept by generating a high-power, high-frequency signal efficiently from a low-frequency intermediate signal (4.1). Secondly, a frequency modulated direct digital synthesis (DDS) based upconversion approach for radar applications from the preceding project (PFM-USR) is presented as the starting point for the subsequent development (4.2). Thirdly, the recent hardware concept and implementation for phase-stepped modulation is described, which allows both frequency modulated radar signals and phase modulated communication signals to be synthesized with the same simple signal generator hardware for integrated communication and ranging.
Harmonic sampling approach
When synthesizing a high-frequency pulsed angle modulated signal, classic approaches based on a VCO, PLL, linear amplifier, and pulsed switch are not suitable for meeting goals such as low complexity and low power consumption. Instead, a baseband modulator is proposed that generates much lower frequencies than at the system's RF output, e.g. 5.8 GHz instead of 63.8 GHz. At lower frequencies, analog RF circuits are usually more efficient than their high-frequency counterparts. The baseband signal is then applied to the input of a passive or low-power non-linear element that generates harmonics, e.g. a diode or transistor (see Fig. 5). Finally, a SILO is used to amplify the upconverted signal by typically more than 50 dB (within the pulse duration). For an instantaneous output power level of 0 to 5 dBm, an injection level of less than −45 dBm is sufficient, which allows for high losses and low power consumption in the preceding frequency multiplier stage.
In order to avoid strong intermodulation products caused by the baseband modulation, it should be "slow" compared to the center frequency of the baseband signal so that the non-linear element's instantaneous input and filtered output signal can be considered approximately single tone. This requirement is needed for the SILO, which can itself only correctly regenerate constant envelope signals (apart from the fact that intermodulation products are undesirable) that are stable during the startup phase of the oscillator, e.g. FMCW signals with low ramp slope or rectangular shaped PSK with symbol rate / pulse repetition frequency much smaller than RF frequency.
Regarding the maximum baseband modulation bandwidth, there exists a limit on the frequency multiplication factor n in order to guarantee spectral separation, since the bandwidth increases with the harmonic order whereas the spacing of the harmonics' center frequencies stays equidistant. The resulting upper boundary for the multiplication factor, expressed in terms of the harmonic center frequency f_c and the harmonic modulation bandwidth B, is given in [2] (see also Fig. 5, right).
Frequency modulated baseband upconversion
The "classic" approach towards synthesizing linear frequency modulated signals (see Fig. 6) consists of a DDS generating a low frequency reference chirp, a PLL and VCO loop and a linear power amplifier. By adding a pulsed switch at the output, pulsed frequency modulation can be realized similar to section 4.2 as long as the pulse width is short enough (the latter signal has constant phase during the pulse, the first one features slight frequency modulation).
Obviously, this classic approach has several disadvantages at high frequencies, especially power consuming linear amplifiers and a switch that dissipates more than 90% of the RF power at common pulse sequence duty cycles of less than 1:10. Therefore, a harmonic sampling approach was proposed to directly synthesize the ramp from a DDS signal while avoiding PLLs and linear amplifiers at high frequencies [2]. Due to the bandwidth restrictions with harmonic sampling (see section 4.1), a single non-linear stage is not sufficient to generate a 7-8 GHz ramp with a commercially available 1 GS/s DDS circuit. Hence, a Nyquist image from the DDS is used to shift the baseband output frequency range to 1.4-1.6 GHz (see Fig. 7).
The main advantage of this concept is that the generated pulsed frequency modulated signal features a very good linearity in comparison to simple PLL control loops and that the only active component at output frequency is a simple, efficient oscillator (SILO). Despite the simplicity of this concept, its hardware design is quite challenging, since the amplitude of a wideband sweep is subject to many inherent sources of frequency dependent amplitude behavior like DDS spectral envelope, insufficient filter flatness and the non-linear element, which increases existing amplitude variations notably.
Phase stepped modulation with CW baseband for integrating radar and communication
For integrated communication and ranging, it is desirable to construct a hardware that can synthesize signals for both domains. In the past, they have mostly been developed separately with different hardware concepts. The previously proposed concept (4.2) is well suited for radar systems, but very specific to frequency modulated signals. In fact, angle modulated communication signals can be synthesized with further reduced effort (see Fig. 8) from a CW source with a phase shifter. It is synchronized with the SILO's pulse sequence and its offset is configured to guarantee that each new phase state is stable when the oscillator is turned on. This kind of modulation technique can also be employed to generate frequency modulated signals efficiently according to section 3.5 by using an appropriate sequence of phase samples that represent a frequency chirp. Regarding hardware implementations, there are two major alternatives concerning the location of the phase shifter in the signal path. An attractive option is to add phase modulation before the frequency multiplication stage; this leads to a minimum amount of RF components and the phase shifter only needs to cover a shifting range of 30 degree, which is easy to design with good linearity. However, baseband modulation limits the multiplication factor (see section 4.1) and the phase shifter causes amplitude fluctuations that are increased in the subsequent non-linear stage. Alternatively, the phase shifter can be placed between RF filter and SILO, which allows for fast modulation, high multiplication factors with less effort (only a constant frequency single tone signal is applied), but requires a more sophisticated 360 degree phase shifter at RF frequency.
SILO concept and implementation
Consider the signal displayed in Fig. 1 and the basic SILO model depicted in Fig. 2. As a pulse width T d of 1 ns and shorter was to be accomplished, the large parasitic capacitances associated with discrete components made it clear that only an integrated solution would be suitable for implementation of the SILO.
As a benchmark for the novel circuit concept of the SILO, some key components of a more conservative concept of generating pulsed frequency modulated signals were developed in an integrated circuit.
All integrated circuits were designed in Cadence Virtuoso and simulated using the Cadence Virtuoso Spectre Circuit Simulator (Cadence, Spectre and Virtuoso are registered trademarks of Cadence Design Systems, Inc). The transmission lines and passive baluns used in the 63.8 GHz-IC were simulated in the Sonnet Professional 2.5D field simulator.
The benchmark circuit: VCO with integrated switch
To evaluate the efficiency of the SILO approach, a conventional circuit using a VCO with wide tuning range and an output switch was designed. The system with the manufactured IC is shown in Fig. 9.
The schematic of the VCO can be seen in Fig. 10, together with the half-circuit of the designed output switch. The VCO is based on a common collector Colpitts oscillator design, including a second varactor diode pair at the transistor base. It is described in detail in [8]. A short overview is given in the following.
A bipolar current mirror is used to drive the oscillator core. The emitter-follower output buffer from [8] was replaced by a differential pair to increase common-mode rejection. The frequency-defining series resonant circuit of the VCO consists of L_B and C_in: L_B is realized as a spiral inductor without tuning capability, and tuning is provided by varying C_in, which has to be tuned over a wide range using variable MOS-capacitance circuits.
For a minimum influence on the tuning range, C_P has to be minimized. It consists mainly of the collector-base capacitance C_CB of transistor T and is thus given by the transistor size and bias conditions. C_S, which is determined mainly by C_BE, has to be maximized. Additionally, both varactor capacitance ranges have to be maximized. For a more detailed discussion, refer to [15]. The proposed pulsed ultra-wideband signal generation requires a switch after the frequency-synthesizing PLL. The switch should have a minimum switching time in both the on and off directions to enable the use of very short pulses (in the 1-10 ns range). Additionally, a constant input port impedance is important in order not to change the loading of the oscillator.
A switch circuit was designed based on [10]. The original work was aimed at a 22 − 29 GHz UWB radar for automotive applications. Fig. 10, right, shows the half-circuit.
The circuit works by switching the bias currents through branches A and B, implemented by transistors Q_1 to Q_4 and Q_5 to Q_8, respectively. This is done by alternating the control voltages applied to the switching stages Q_1/Q_2 and Q_5/Q_6. The differential common-base stages (Q_3/Q_4 and Q_7/Q_8) provide amplification and isolation, depending on the bias current. Transistor Q_9 provides the bias current, which is switched between the branches. Fig. 11 shows the transient simulation of the output signal for a single rising V_C edge with a rise time of 5 ps. The delay between the control edge and 90% of the output level is below 250 ps. The addition of a matching network would improve the insertion loss, but at the cost of worse area efficiency. The simulated input-referred noise was between 2.83 nV/√Hz and 3.67 nV/√Hz.
A combination of VCO and output switch was simulated and then manufactured.
SILO oscillator concepts
As the injection-locking property stems directly from general oscillator theory, any oscillator can in theory be employed for switched injection locking. There is an interesting trade-off to be made when choosing an oscillator configuration for building a SILO: on the one hand, the oscillator Q-factor should be high and the excess loop gain low for better phase noise performance, but on the other hand a high-Q oscillator with low excess loop gain takes longer to start oscillating, which is critical for pulsed angle modulated signal generation. A careful balance between the two qualities has to be found.
Another consideration is the point in the oscillator loop at which the signal is injected. In a cross-coupled oscillator, the resonator and gain stages are directly connected to the output. This means that there has to be a buffering circuit for the injected signal which provides backward isolation, in order to ensure that the oscillation frequency is not influenced by the circuitry connected to the tank.
For the design of the SILO circuits, we concentrated on resonator-based oscillators, as they typically show better phase noise performance than inverter-based ring oscillators. A demonstrator implementation in discrete components was used for initial experimentation and verification of the viability of our approach. This circuit was aimed at a frequency range of 6 to 8 GHz. Subsequently, a SILO IC based on a pulse generator and a cross-coupled LC-oscillator was designed and manufactured. In a final step, a harmonics generator was combined with a Colpitts oscillator to sample a 5.8 GHz-signal and emit a 63.8 GHz-signal.
6 and 8 GHz SMT SILO
For reference and for first experiments, SILO implementations based on surface-mounted planar technology were realized. They are based on an ordinary common-collector Colpitts oscillator and designed for natural frequencies of 6 GHz and 7.5 GHz, respectively. In order to implement injection locking, a directional coupler was added to apply the injection signal to the oscillator's output (see Fig. 12). The maximum achievable (10 dB) bandwidth is about 600 MHz at a 7.5 ns pulse width.
Apart from parasitic technological limitations of lumped planar implementations, the single-ended design features an inherent source of self-locking to a harmonic of the switched power supply. Therefore, the pulse width is limited to about 10 to 20 ns in order to achieve a good compromise between bandwidth and minimum injection level.
7 GHz integrated circuit
The circuit consists of two active baluns for single-ended to differential and differential to single-ended conversion, a Schmitt trigger with a modified current-mirror load for current peak generation, and a simple cross-coupled oscillator for signal generation. It has an externally controllable pulse repetition rate and a pulse duration of approx. 1 ns. During operation it consumes 33 mA at a 3.3 V supply voltage while generating a > 330 mVpp signal. The generated signal has a 10 dB bandwidth of over 2 GHz at a 7.5 GHz center frequency.
Both Schmitt trigger with current peak generator and VCO with Q-degeneration circuits are shown in Fig. 13.
As efficient integrated circuits are built in a differential configuration, whereas external circuitry and measurement equipment are usually only available in single-ended configuration, single-ended to differential (S2D) and differential to single-ended (D2S) conversion circuits are needed in the IC. We designed a simple active balun circuit that can act as both an S2D and a D2S converter. When employed as an S2D converter, both outputs and one input are connected; when used as a D2S converter, one output and both inputs are connected.
In order to control the pulse repetition rate externally, a Schmitt-trigger circuit with a current peak generator was designed based on [13]. The circuit enables a wide range of pulse repetition rates (1-80 MHz could be achieved with the measurement equipment at hand). The resistor R_B together with the base-emitter capacitance C_BE3 controls the time constant τ_current of the charging circuit. The peak generator was designed for a pulse duration of 1 ns by selecting the resistor size R_B = 5 kΩ.
For the oscillator, a simple cross-coupled topology was chosen. As the oscillator has to lock to the injected phase, a low Q is preferable. In order to degenerate the Q, a resistor was connected in parallel to the LC-tank circuit. The current is provided by the peak generator. Fig. 13 shows the implementation.
A simple common-collector circuit is used as an output buffer to drive the 50 Ω load.
63 GHz integrated circuit
The system developed for pulsed angle modulated signal generation at mm-wave frequencies is shown in Fig. 14. The 5.8 GHz input signal is coupled into the harmonics generator, which consists of a bipolar transistor with a resonant load. The load, consisting of a transmission line of inductance L_1 and capacitors C_1 and C_2, is designed to couple the wanted 11th harmonic into the transformer. Fig. 17 shows the output power of the 1st, 10th, 11th, and 12th harmonics as a function of the input power; for an input power > −3 dBm, the 11th harmonic is the strongest. The now differential signal is used to lock the VCO shown in Fig. 15. The signal is coupled to the collector load transmission lines of the Colpitts oscillator using a transformer with a center tap, and the center tap is connected to the pulsed current source of the oscillator. A simulation of the whole system was not possible, because the system works in three frequency ranges that differ by orders of magnitude: the 5.8 GHz input signal, the 63.8 GHz output signal, and the SILO pulse repetition frequency (10-100 MHz). Combined with the lack of models for switched injection locking in the EDA software, this made it more practical to design each component (harmonics generator, VCO, pulse generator) separately. Fig. 16 shows the layout of the SILO circuit with its sub-components.
Verification of sampling theory
In order to verify the theoretical predictions concerning the switched injection-locked harmonic sampling approach of section 3.3, a demonstrator based on lumped planar components was built (see Figs. 18 and 19). It consists of a 480 MHz, 0 dBm signal source, a 10 MHz DAC-modulated phase shifter, a single biased bipolar transistor as frequency multiplier, a band-pass filter (200 MHz @ 5.8 GHz), and the 5.8 GHz switched injection-locked oscillator, which is turned on and off by the digital baseband synchronously with the DAC modulation. Fig. 20 depicts the spectrum at the SILO's output. It features the typical sinc-shaped peak comb in pulsed mode, which is aligned to and follows the injection frequency of 5.76 GHz when the latter is changed. When the oscillator's natural frequency (which, according to Fig. 20, differs from the injection frequency) is tuned using a varactor diode, the sinc shape of the spectrum moves along the frequency axis while the peak positions do not change. These results prove most of the main claims of the generalized sampling theory according to (5) [4].
Synthesis of communication signals
The synthesis of time domain communication signals was demonstrated using an 8 PSK modulation with cyclic transmission of all symbol values and maximum symbol rate, i.e. one symbol per pulse. The output signal of the demonstrator (Fig. 18, 19) was mixed to baseband using a quadrature mixer and displayed using an oscilloscope. Its waveform (Fig. 21) clearly shows the phase states and their repeatability in the IQ diagram. These results prove for the first time that it is feasible to generate UWB signals with more complex phase modulation than BPSK while at the same time keeping complexity and power consumption low.
Synthesis of radar signals
According to sections 3.5 and 4.3, the same simple hardware implementation used for communication signal synthesis (Figs. 18 and 19) can be employed to generate pulsed frequency modulated radar signals by repeatedly transmitting a limited list of phase samples. For a pulse rate of 10 MHz and a ramp slope of 20 MHz/μs, only 50 phase samples (one per pulse) are sufficient. For verification, this approach was realized both with the previously employed lumped-component SILO (6 GHz, 600 MHz bandwidth) and with the first large-bandwidth integrated circuit implementations (7 GHz, > 2 GHz bandwidth) in order to demonstrate the resolution benefit for ranging. The setup for both experiments is depicted in Fig. 22; the generated and delayed signal is acquired using an oscilloscope and evaluated on a PC using numerical computation software, where it is mixed with a linear FMCW signal and analyzed in the frequency domain (FFT). Figs. 23 and 24 show the resulting beat frequency spectrum for the integrated circuit implementation using 1 ns pulses and a 10 MHz pulse repetition rate. It corresponds to equation (10) except for the small peaks that result from imperfections in the oscillator design, leading to a slight turn-on pulse self-locking effect. Future designs are expected to fix this issue.
Comparing the results of the lumped and integrated circuit implementations (see Fig. 25), the benefit of much higher bandwidths regarding resolution becomes obvious. If the oscillator's spectral bandwidth is too small in relation to the sweep bandwidth, the beat frequency peak is broadened because of additional windowing through the narrowband SILO spectrum. Therefore, the oscillator bandwidth / pulse width should be adjusted to the desired sweep bandwidth in order to maximize spectral efficiency [3].
VCO with switch IC
The manufactured circuit is depicted in Fig. 26. It measures 710 × 1455 μm². Because differential measurement equipment was not available, all measurements were done single-ended, with the unused output terminated to ground with a 50 Ω resistor. The phase noise performance of the VCO with switch has deteriorated significantly compared with the previous stand-alone VCO [8]. This is mainly attributed to the new buffer structure, which performed worse than anticipated.
7 GHz SILO IC
The IHP Technologies SGB25V 250 nm SiGe:C BiCMOS process was chosen for manufacturing. It provides a cheap and flexible platform including one or two thick top metal layers made of aluminum. The advantage of using a BiCMOS process for a transmitter circuit is the possibility to build a system-on-a-chip (SoC) solution. Fig. 29 shows a chip photograph with connected measurement probes. Fig. 30 shows the output power spectrum of the manufactured SILO.
The 10 dB bandwidth stretches from 5 to 8 GHz. A single pulse is shown in Fig. 31. A single cycle of oscillator start-up, oscillation and decay has a duration of 1.5 ns.
Future work
Since this project is still ongoing, future work will cover further aspects that enhance theory and hardware implementation. Regarding pulsed angle modulated signals, more complex modulation schemes will be developed in conjunction with a more comprehensive study of error sources and their compensation. Furthermore, the first designs of the SILO circuit will be refined for an even better performance and higher integration level. Last but not least, hardware concepts for receiver technology are being developed. | 8,197 | 2013-03-13T00:00:00.000 | [
"Engineering",
"Physics"
] |
Enhanced Desulfurization by Tannin Extract Absorption Assisted by Binuclear Sulfonated Phthalocyanine Cobalt Polymer: Performance and Mechanism
Removal of hydrogen sulfide (H2S) from coke oven gas has attracted increasing attention due to economic and environmental concerns. In this study, tannin extract (TE) absorption combined with a binuclear sulfonated phthalocyanine cobalt organic polymer (OTS) and binuclear sulfonated phthalocyanine cobalt (PDS) in a fixed bed reactor is used for the removal of H2S. The effects of gas flow rate, H2S concentration, and the co-existence of organic sulfur compounds and O2 were investigated. Then, the effects of total alkalinity and of the TE, NaVO3, OTS and PDS contents were studied in detail. The experimental results demonstrated that 100% H2S conversion could be maintained for 13 h at a total alkalinity of 5.0 g/L, a TE concentration of 4.0 g/L, a NaVO3 concentration of 5 g/L, and OTS and PDS concentrations of 0.2 g/L and 0.2 g/L, respectively. OTS and PDS showed a synergistic effect in boosting the TE desulfurization efficiency. The results provide a new route for the investigation of liquid-phase catalyzed oxidation desulfurization in an efficient and low-cost way.
Introduction
With the increase in energy demand and the rapid development of the chemical industry and automobile consumption, the requirements for coke are constantly rising [1,2]. Coke is often produced by the carbonization of coal in the steel industry. Coke oven gas (COG) is considered a valuable by-product of the coking process; it contains ~55-60% H2, ~23-27% CH4, ~5-9% CO, ~1.9-4% CO2, ~3-5% N2 and ~0.4-0.8% O2, along with C2H6 and other hydrocarbons, H2S and NH3 in small quantities [3-5]. COG can be utilized to produce high value-added chemical products such as hydrogen, methanol and synthetic natural gas. However, the presence of H2S may poison the catalysts used in the follow-up utilization of COG, such as Ni-based catalysts. Additionally, H2S is highly corrosive to steel equipment and poisonous to human beings. It is therefore of great importance to remove H2S from COG for health and environmental reasons. The main technologies for H2S removal from COG in the traditional coking industry include the Takahax method (TH) [6], the anthraquinone disulfonic acid method (ADA) [7,8], tannin extract absorption [9,10] and the PDS method [11]. Among them, tannin extract absorption and the PDS method are promising due to their low cost and high efficiency.
Tannin extract is a water-soluble polyphenolic substance, a polymer with molecular weights of 500-3000 whose main component is tannins. Tannin is a complex mixture containing numerous hydroxy-aromatic substances with a strong ability to adsorb oxygen. It can also act as a complexing agent, forming water-soluble vanadium complexes with vanadium compounds. Tannins are extracted from the boiled stalks, leaves and barks of plants [12,13]. In the typical tannin extract (TE) technology for H2S removal, H2S is oxidized into elemental sulfur in the presence of an oxidant such as pentavalent vanadium. Gao et al. [14,15] investigated H2S absorption by pentavalent vanadium and tannin extract in the oxidized state using cyclic voltammetry. H2S is absorbed to produce bisulfide ions HS− and polysulfide ions Sx2−. These sulfur-containing species are then oxidized by pentavalent vanadium into sulfur, which can easily be separated from the liquid solution. However, the polysulfide ions Sx2− are also the main source of byproducts such as thiocyanate, thiosulfate and sulfate, owing to their reactivity with oxygen [16,17]. These sulfur-containing compounds cause a decline in desulfurization efficiency. In this regard, many efforts have been made to inhibit the production of byproducts. Ji and co-workers [18] investigated the use of OTS, a binuclear sulfonated phthalocyanine cobalt organic polymer, as an additive in tannin extract absorption. The results showed that the addition of OTS could reduce byproduct formation, with OTS promoting the conversion of NaSCN to NaHCO3. Nonetheless, the exhausted absorption solution was difficult to dispose of. Equally importantly, the influence of the reaction conditions on desulfurization in tannin extract absorption is not clear, although it has great implications for industrial application.
The PDS method is another promising desulfurization technology, widely used in Chinese industry, in which binuclear sulfonated phthalocyanine cobalt is used as a catalyst to facilitate the oxidation of H2S [19]. The PDS method can prevent the reagent from being poisoned by HCN, and also offers advantages such as high sulfur capacity, low alkali consumption and high oxidation speed, because it provides multiple double Co-N4 active sites to boost the reaction. However, the formation of Sx2− cannot be avoided, leading to deeply oxidized sulfur-containing salts as byproducts. It is therefore urgent to find a new approach with high efficiency, low cost and less byproduct formation.
Herein, we explore a novel method combining the improved tannic extract absorption with OTS and PDS methods for H 2 S removal. The PDS compound showed good capacity for O 2 activation in the solution. The OTS polymer plays a vital role in the solubility of tannin extract. The NaVO 3 is used first to replace V 2 O 5 as an oxidant. Firstly, the reaction parameters including total alkalinity, content of tannin, Na 2 CO 3 , NaHCO 3 and NaVO 3 are thoroughly investigated to obtain an optimal reaction condition. Then, different additives including OTS, PDS, CuCl 2 , MnCl 2 , MgCl 2 , CaCl 2 , hydroquinone and picric acid are used for the comparative study. Multiple analytical technologies are employed to elucidate the reaction mechanism. Such acquired results could pave a new way for designing a flexible approach for H 2 S removal from industrial off-gas.
Batch Absorption Experiments
Absorption solution (200 mL) for H2S removal was prepared by mixing different ratios of NaHCO3 and Na2CO3 to control the pH of the solution, together with different masses of NaVO3 and TE, in a 250 mL flask placed in a water bath with a magnetic stirrer at room temperature. The H2S from the gas cylinder was diluted with N2 (99.99%) to 300-1100 ppm, and the total flow rate (Q) of the simulated gas stream was fixed at 150-350 mL·min−1. The H2S concentrations in the inlet and outlet gas of the reaction system were measured by a 9790 gas chromatograph (Jiangsu Fuli Analytical Instrument Co., Ltd., China) with a flame photometric detector at 150 °C. In a typical batch experiment, 0.8 g Na2CO3, 0.8 g NaHCO3, 0.8 g TE and 0.6 g NaVO3 were added to 200 mL deionized water, and Q was kept at 200 mL·min−1 with 500 ppm H2S. The experimental setup is shown in Figure S1.
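For reference, the H2S conversion (removal efficiency) reported throughout the results can be computed from the inlet and outlet GC readings as a simple ratio; the helper below is a generic sketch, not the authors' data-processing code.

```python
def h2s_removal_efficiency(c_in_ppm: float, c_out_ppm: float) -> float:
    """Desulfurization (removal) efficiency in percent from inlet/outlet H2S concentrations."""
    if c_in_ppm <= 0:
        raise ValueError("inlet concentration must be positive")
    return 100.0 * (c_in_ppm - c_out_ppm) / c_in_ppm

# Example: 500 ppm fed, 2 ppm measured at the reactor outlet
print(f"{h2s_removal_efficiency(500, 2):.1f} % removal")
```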
Characterization Methods
The crystal phases were measured by powder X-ray diffraction (XRD) (Rigaku, Tokyo, Japan). The XRD system was equipped with Cu Kα radiation (λ = 0.15406 nm), and patterns were collected at a scanning rate of 1°/min over a 2θ range from 10° to 80°.
The valences of the surface elements were recorded by X-ray photoelectron spectroscopy (XPS) (ESCALAB 250Xi, Thermo Fisher Scientific) using Al Kα radiation. All samples were calibrated using the C 1s peak of adventitious carbon at 284.8 eV as the standard.
Optimization of Reaction Parameters
The TE technology is highly dependent on the operating conditions, so an investigation was conducted to optimize the reaction parameters. The effects of gas flow rate, H2S concentration, organic sulfur compounds and oxygen content were thoroughly investigated, as shown in Figure 1. From Figure 1b, the desulfurization rate reached 100% during the first 6 h for all gas flow rates. As time went on, the desulfurization rate began to decrease earlier at higher flow rates. This may be because a higher flow rate speeds up the contact between H2S and NaVO3; as a result, NaVO3 is consumed more quickly, followed by deactivation of the solution. The effect of H2S concentration showed the same trend as the gas flow rate (Figure 1a). At 1100 ppm H2S, 100% removal could be sustained for about 6 h; as the H2S concentration decreased, the duration of 100% removal extended to up to 12 h at 300 ppm. To clarify the influence of organic sulfur compounds, COS or CS2 was fed separately together with H2S (Figure 1c). It is worth noting that COS or CS2 could be removed completely together with H2S during the first 8 h. After 8 h, however, the removal rate began to decline in the presence of COS or CS2, whereas 100% removal could be sustained for 10 h without COS and CS2. This suggests that COS and CS2 are converted into H2S under the experimental conditions, consuming NaVO3 and reducing the activity. The reaction was also investigated in the presence of 5% O2, as shown in Figure 1d. The reaction was enhanced considerably in the presence of O2, and 100% removal was maintained for 13 h, 3 h longer than without O2; however, byproducts were produced afterwards. Additionally, O2 can oxidize the multi-phenol structures in TE into quinoid structures, and the quinoid structure plays a key role in the re-oxidation of V4+ to V5+.
Effect of Total Alkalinity and Na 2 CO 3 Content
The efficiency of the TE technology is strongly influenced by the total alkalinity, which can be regulated by the dosage of NaHCO3. The impact of the amount of NaHCO3 was investigated, and the results are shown in Figure 2a. The duration of complete removal increased with increasing total alkalinity. At a total alkalinity of 0.2, the absorbent was exhausted within 2 h; at a total alkalinity of 1.0, 100% conversion could be kept for 7 h, much longer than at the other total alkalinity levels. The total alkalinity was then kept at 1.0 while the amount of Na2CO3 was varied from 0.4 g to 1.2 g, with the amount of NaHCO3 adjusted correspondingly. The desulfurization efficiency is shown in Figure 2b. The H2S removal performance increased gradually with increasing Na2CO3 at constant total alkalinity, and the longest duration of greater than 90% efficiency, more than 8 h, was obtained with 1.2 g of Na2CO3. The reason can be attributed to the higher alkalinity of Na2CO3 compared with NaHCO3: the acidic gas H2S reacts with OH− in the solution to form HS− and S2−, and at the same total alkalinity more Na2CO3 produces more OH− ions, leading to higher H2S removal efficiency through the formation of HS− for further oxidation.
Effect of Content of TE
After optimizing the total alkalinity and the Na2CO3 content, the effect of the TE content was also investigated thoroughly, as shown in Figure 2c. The desulfurization duration was studied by varying the TE content from 0.4 to 1.2 g. Varying the TE content did not change the desulfurization duration, which remained about 8 h, indicating that TE does not participate directly in H2S oxidation and absorption. According to previous research [18], sodium divanadate, the product of the reaction between NaVO3 and S2−, can be re-oxidized into NaVO3 in the presence of TE and O2; the quinoid structure in TE plays a key role in this oxidation, whereas the multi-phenol structure, the dominant species in TE, has a negative effect on it. As a result, the TE content had a negligible effect on H2S removal performance under anaerobic desulfurization, meaning that a small dosage of TE is enough for the highest activity. It is worth noting that without TE addition the desulfurization duration could only be sustained for 3.5 h: the tannin extract plays a key role in complexing the vanadium compounds and in the release of elemental sulfur, and this complexing effect prevents the deposition of vanadium compounds in the aqueous solution, thereby extending the desulfurization activity.
Effect of Content of NaVO 3
As NaVO3 is the main component in the TE technology, we further investigated the influence of the NaVO3 content on the H2S removal efficiency. The results are shown in Figure 2d. With 0.2 g of NaVO3, H2S was completely removed by the absorbent for about 6 h. With increasing NaVO3, the duration of complete desulfurization extended gradually; with 1.2 g of NaVO3, the duration reached 10 h, about twice that obtained with 0.2 g. Interestingly, once the absorbent began to lose activity, deactivation was fast, indicating that the activity of the absorbent is positively correlated with the NaVO3 content. The high efficiency at high NaVO3 content can be attributed to the high-valence state V5+, which has considerable oxidizing power. Moreover, the lattice oxygen in NaVO3 can directly take part in the oxidation of H2S into elemental sulfur, rather than the dissolved oxygen in the aqueous solution. In the reaction, V5+ is transformed into V4+ along with the transformation of lattice oxygen into active oxygen species. In this regard, the reaction can be enhanced by increasing the NaVO3 content.
TE Technology Combined with OTS and PDS
In order to enhance the efficiency of the TE method, the TE technology was operated with OTS and PDS added singly and simultaneously. Additionally, CuCl2, MnCl2, MgCl2, CaCl2, hydroquinone and picric acid were used as single additives for comparison, as depicted in Figure 3a. The inorganic metal salts and organic substances both demonstrate good catalytic performance in the TE process: they act as catalysts for the transformation of low-valence V4+ to high-valence V5+, regenerating the oxidant. The results showed that OTS had the best activity and hydroquinone the worst among these additives, in the order OTS > PDS > MgCl2 = CaCl2 > picric acid > MnCl2 > CuCl2 > hydroquinone. The effect of the OTS content was first investigated by adding different dosages of OTS to the TE system; the results are shown in Figure 3b. The addition of OTS clearly enhanced the desulfurization efficiency: even a low dosage of 0.05 g prolonged the duration from 8 h to 10 h, and the longest duration of 100% efficiency, about 13.5 h, was achieved with 0.1 g OTS. With dosages above 0.1 g the activity began to decline, because the transition metal center forms a coordination compound with TE that boosts the reaction synergistically, while excess addition inhibits this effect. The effect of PDS was also investigated separately, and the result is shown in Figure 3c. PDS showed the same trend as OTS: the highest activity was achieved at 0.1 g, and the duration gradually decreased with further addition. Furthermore, the desulfurization efficiency was studied when both OTS and PDS were added. As seen in Figure 3d, the desulfurization efficiency could be enhanced by adding either OTS or PDS alone; when OTS and PDS were added simultaneously, about 100% conversion was maintained for roughly 13 h. It can be inferred that the co-addition of OTS and PDS has a synergistic positive effect on desulfurization, which can be attributed to the different roles that OTS and PDS play in tannin extract absorption. When sulfonated salts such as OTS and PDS are present in the solution, the undissolved substances in the tannin extract solution are reduced. They provide not only Co-N4 active sites but also the C4 sites in the benzene ring, which can coordinate with ligands such as HS− and S2−. When OTS and PDS are added simultaneously, a novel polymer is formed, leading to a longer conversion duration than with a single additive. In this regard, OTS and PDS supply considerable active sites for coordination with sulfur-containing compounds. By introducing OTS and PDS into the TE method, TE becomes more soluble and the catalytic activity is enhanced due to the complexing effect of the OTS polymer and PDS.
Insights into the Mechanism
As shown in Figure 2, Na2CO3, NaHCO3, TE and NaVO3 all play important roles in H2S removal when the tannin extract absorption method is used. In Figure 2d, H2S was adsorbed stably for 2 h even when NaVO3 was not added, indicating that Na2CO3 and NaHCO3 maintain the stability of the solution environment. In Figure 2c, the duration remained around 8 h regardless of the TE mass, indicating that excess TE is not necessary in the desulfurization process. To explore the H2S reaction mechanism, XRD and XPS characterization was used.
XPS analysis was performed on the samples to follow the transfer and accumulation of surface sulfur species over time. Figure 4 presents the S 2p XPS spectra of the sample, in which six kinds of sulfur peaks were assigned: S2− 2p1/2, S2− 2p3/2, V-S, elemental S (S0), S-O and Sn species [20,21]. The S2− 2p1/2 and S2− 2p3/2 peaks disappeared at 1 h, then appeared at 161.85 eV and 162.73 eV, respectively, followed by a shift of the 2p1/2 component to a lower binding energy (BE) of 161.71 eV and of the 2p3/2 component to a higher value of 162.93 eV, and the same tendency continued thereafter [22-24]. This indicates that S2− was rapidly transferred to other sulfur species and that the two S 2p components behaved differently, with S2− 2p3/2 taking part in the chemical reaction while S2− 2p1/2 remained steady and dispersed. The V-S peaks appeared at 163.64 eV and 163.44 eV at 1 h and 4 h, respectively [25,26], and then disappeared in the later measurements, indicating that V-S formed stably (the downward BE shift being negligible) and that vanadium was subsequently transferred from sulfur to other active sites once the V-S peaks disappeared. The S0 peaks were present throughout the reaction, indicating that elemental sulfur was formed; over time their BE moved through 164.84 eV, 164.58 eV, 164.34 eV and 164.62 eV, suggesting that the S0 product shifted towards lower binding energy with low stability during the reaction, while other products formed after 8 h enhanced the stability of S0. The S-O peaks appeared at BE of 168.26 eV, 168.24 eV, 167.94 eV and 167.59 eV throughout the process, indicating that S-O was unstable given the gradual downward shift of its BE; the S-O peak area at 8 h was much lower than at 7 h, possibly because vanadium transferred to the oxygen-related active sites after the V-S peaks disappeared. The Sn peaks formed after 7 h, at BE of 167.94 eV and 168.74 eV, indicating that the Sn product was unsteady because of its decreasing BE; moreover, the formation of Sn benefits the stabilization of S0. Figure 5 presents the S 2p XPS spectra obtained with the optimum amount of Na2CO3 at different times, with the optimum total alkalinity maintained. Compared with Figure 4a, the S2− peaks appear while V-S disappears in Figure 5a, indicating that the reaction is dominated by the migration of sulfur ions with the increase in hydrolysis temperature in the solution. At 4 h (Figure 5b), the Sn peak is formed, whereas it is not in Figure 4b, indicating that the reaction promotes the formation of the sulfur product with the increase in hydrolysis temperature in the solution. In Figure 5c, the peak areas of Sn and S0 are higher than those in Figure 4c, again indicating that the reaction promotes the formation of S and Sn products with the increase in hydrolysis temperature in the solution. Figure 6 presents the S 2p XPS spectra obtained with the optimum amount of NaVO3 at different times, with the optimum amount of Na2CO3 maintained. Compared with Figure 5a, the V-S peaks appear while S2− disappears in Figure 6a, indicating that V-S is formed with increasing mass of NaVO3. At 4 h (Figure 6b), the Sn peak disappears, whereas it is formed in Figure 5b, indicating that these conditions are not conducive to the formation of Sn in solution. In Figure 5d, the V-S peak still exists.
This indicates that V-S was present throughout the reaction, although it was hard to detect because of the low NaVO3 content. Fresh and exhausted residues obtained after filtration of the absorbent were studied to identify the main reaction products and mechanisms. The XPS results indicated that H2S was oxidized by NaVO3 to the products S0 and Sn. The XRD pattern of NaVO3 is shown in Figure 7; the crystalline phase was NaVO3 (PDF #75-0716), with an orthorhombic lattice and space group Pnma. On this basis, a possible reaction mechanism can be proposed, in which the quinoid structure of TE is denoted TQ and the multi-phenol structure of TE is denoted YHQ. The addition of OTS or PDS enhances the oxidation reaction, as they provide transition-metal active centers for the oxidation of HS− to elemental S and take part in the V5+–V4+–V5+ redox cycle as active catalysts. PDS and OTS are binuclear sulfonated phthalocyanine cobalt compounds and their polymer, providing active Co-N4 sites to activate O2; as a result, V4+ can easily be oxidized back into the active V5+ phase. In addition, YHQ can be oxidized by O2 in the subsequent regeneration step back to TQ, forming H2O2 at the same time. The H2O2 can also oxidize V4+ to V5+ and HS− to elemental sulfur. It is worth noting that further oxidized products such as S2O3 2− and S2O4 2− will be produced if the desulfurization solution contains NaHS in the presence of oxygen during regeneration. Therefore, the contents of TE and NaVO3 are the key factors influencing byproduct formation and should be controlled carefully.
Conclusions
In this study, tannin extract technology combined with binuclear sulfonated phthalocyanine cobalt polymer (OTS) and compound (PDS) for H2S removal was thoroughly investigated. The reaction parameters, including gas flow rate, H2S concentration, and the coexistence of COS or CS2 and O2, were first optimized. A high gas flow rate and a high H2S concentration led to rapid activity loss of the desulfurization solution, while the organic sulfur-containing compounds (COS and CS2) could be removed simultaneously. The total alkalinity and the contents of TE, NaVO3, OTS and PDS were then studied in detail. The results show that suitable conditions for H2S removal are a total alkalinity of 5.0 g/L, a TE concentration of 4.0 g/L, a NaVO3 concentration of 5 g/L, and OTS and PDS concentrations of 0.2 g/L and 0.2 g/L, respectively. OTS is beneficial for the solubility of TE. In addition, OTS and PDS show a synergistic catalytic effect on the V5+–V4+–V5+ and TQ–YHQ–TQ redox cycles by supplying both Co-N4 and benzene-ring C4 active sites, which facilitates the oxidation of H2S into elemental sulfur. This combined method provides new ideas for the development and research of desulfurization in industrial applications.
Author Contributions: Writing-original draft preparation, B.W. and H.C.; formal analysis, X.H.; resources, K.L.; data curation, X.S.; writing-review and editing, Y.L. and P.N. All authors have read and agreed to the published version of the manuscript.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to the data in this study involve enterprise production.
Conflicts of Interest:
The authors declare no conflict of interest. | 6,160 | 2023-03-01T00:00:00.000 | [
"Environmental Science",
"Chemistry"
] |
HulTech: A General Purpose System for Cross-Level Semantic Similarity based on Anchor Web Counts
This paper describes the HULTECH team participation in Task 3 of SemEval-2014. Four different subtasks are provided to the participants, who are asked to determine the semantic similarity of cross-level test pairs: paragraph-to-sentence, sentence-to-phrase, phrase-to-word and word-to-sense. Our system adopts a unified strategy (general purpose system) to calculate similarity across all subtasks based on word Web frequencies. For that purpose, we define ClueWeb InfoSimba, a cross-level similarity corpus-based metric. Results show that our strategy overcomes the proposed baselines and achieves adequate to moderate results when compared to other systems.
Introduction
Similarity between text documents is considered a challenging task. Recently, many works have concentrated on the study of semantic similarity for multi-level text documents (Pilehvar et al., 2013), while skipping the cross-level similarity task. In the latter, the underlying idea is that text similarity can be considered between pairs of text documents at different granularity levels: paragraph, sentence, phrase or word. One obvious particularity of this task is that text pairs may not share the same characteristics of size, context or structure, i.e., the granularity level.
In task 3 of SemEval-2014, two different strategies have been proposed to solve this issue. On the one hand, participants may propose a combination of individual systems, each one solving a particular subtask. On the other hand, a general purpose system may be proposed, which deals with all the subtasks following the exact same strategy.
In this paper, we describe a language-independent corpus-based general purpose system, which relies on a huge freely available Web collection called Anchor ClueWeb12 (Hiemstra and Hauff, 2010). In particular, we calculate ClueWeb InfoSimba, a Web version of InfoSimba (Dias et al., 2007), as a cross-level semantic similarity based on word-word frequencies. These frequencies are captured through a collocation metric called SCP (Silva et al., 1999), which has similar properties to the well-studied PMI-IR (Turney, 2001) but does not over-evaluate rare events.
Our system outputs a normalized (between 0 and 1) similarity value between two pieces of texts. However, the subtasks proposed in task 3 of SemEval-2014 include a different scoring scale between 0 and 4. To solve this issue, we applied linear, polynomial and exponential regressions as three different runs. Results show that our strategy overcomes the proposed baselines and achieves adequate to moderate results when compared to other systems.
System Description
Our system is based on a reduced version of the ClueWeb12 dataset called Anchor ClueWeb12 and an informative attributional similarity measure called In-foSimba (Dias et al., 2007) adapted to this dataset.
Anchor ClueWeb12 Dataset
The Anchor ClueWeb12 dataset contains 0.5 billion Web pages, which cover about 64% of the total number of Web pages in ClueWeb12. The particularity of Anchor ClueWeb12 is that each Web page is represented by the anchor texts of the links pointing to it in ClueWeb12. Web pages are indexed not on their content but on their references. As such, the size of the index is drastically reduced and the overall results are consistent with full text indexing as discussed in (Hiemstra and Hauff, 2010).
For development purposes, this dataset was indexed in Solr 4.4 on a desktop computer using a batch indexing script. Particularly, each compressed part file of the Anchor ClueWeb12 was uncompressed, preprocessed and indexed in a sequential way using the features of incremental indexing offered by Solr (Smiley and Pugh, 2009).
InfoSimba
In (Dias et al., 2007), the authors proposed the hypothesis that two texts are similar if they share related (possibly different) constituents. Their concept of similarity is therefore no longer based on the exact match of constituents but relies on related constituents (e.g. words). For example, it is clear that the following text pieces extracted from the sentence-to-phrase subtask are related although they do not share any word:
he is a nose-picker
an uncouth young man
The InfoSimba similarity measure models this phenomenon by evaluating individual similarities between all possible word pairs. Each piece of text is represented by the vector of its words. Given two pieces of text X_i and X_j of sizes p and q, their similarity is defined as IS(X_i, X_j) = (1 / (p·q)) · Σ_{k=1..p} Σ_{l=1..q} SCP(w_ik, w_jl) (1), where w_ik is the word at the k-th position in vector X_i and SCP(.,.) is the Symmetric Conditional Probability association measure proposed in (Silva et al., 1999), SCP(w_a, w_b) = P(w_a, w_b)² / (P(w_a) · P(w_b)) (2), with P(.,.) the joint probability of two words appearing in the same document and P(.) the marginal probability of a word appearing in a document. Following the previous example, the InfoSimba value between the two vectors X_1 = {"he", "is", "a", "nose-picker"} and X_2 = {"an", "uncouth", "young", "man"} is an average weight formed by all possible word-pair associations, as illustrated in Figure 1: each vertex is a word of a vector X_l and each edge is weighted by the SCP(.,.) value of the connected words. In the case of task 3 of SemEval-2014, each text pair is represented by two word vectors for which a modified version of InfoSimba, ClueWeb InfoSimba, is computed.
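A minimal sketch of this pairwise computation, using document-frequency estimates of the probabilities (the corpus counts below are placeholders, not ClueWeb statistics):

```python
from itertools import product

def scp(w1, w2, doc_freq, joint_freq, n_docs):
    """Symmetric Conditional Probability: P(w1,w2)^2 / (P(w1) * P(w2))."""
    p1 = doc_freq.get(w1, 0) / n_docs
    p2 = doc_freq.get(w2, 0) / n_docs
    p12 = joint_freq.get(frozenset((w1, w2)), 0) / n_docs
    return 0.0 if p1 == 0 or p2 == 0 else (p12 ** 2) / (p1 * p2)

def infosimba(x_i, x_j, doc_freq, joint_freq, n_docs):
    """Average SCP over all word pairs of the two text vectors."""
    pairs = list(product(x_i, x_j))
    return sum(scp(a, b, doc_freq, joint_freq, n_docs) for a, b in pairs) / len(pairs)

# toy corpus statistics (placeholders)
doc_freq = {"nose-picker": 5, "uncouth": 40, "young": 900, "man": 1200,
            "he": 1500, "is": 1600, "a": 1700}
joint_freq = {frozenset(("uncouth", "man")): 20, frozenset(("young", "man")): 700,
              frozenset(("nose-picker", "uncouth")): 3}
print(infosimba(["he", "is", "a", "nose-picker"],
                ["an", "uncouth", "young", "man"], doc_freq, joint_freq, 10000))
```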
ClueWeb InfoSimba
The final similarity metric, called ClueWeb InfoSimba (CW IS), between two pieces of text is defined in Equation 3 as the InfoSimba of Equation 1 computed with SCP-IR instead of SCP: CW IS(X_i, X_j) = (1 / (p·q)) · Σ_{k=1..p} Σ_{l=1..q} SCP-IR(w_ik, w_jl) (3), where hits(w) returns the number of documents retrieved by Solr over Anchor ClueWeb12 for the query w and hits(w_a ∧ w_b) is the number of documents retrieved when both words are present simultaneously. SCP is modified into SCP-IR in the same way that PMI is turned into PMI-IR, i.e., using hit counts instead of probability values: SCP-IR(w_a, w_b) = hits(w_a ∧ w_b)² / (hits(w_a) · hits(w_b)) (4).
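A sketch of the Web-counts variant follows; the `hits` function stands in for a query against the Anchor ClueWeb12 Solr index and is a placeholder here, as are the counts.

```python
from itertools import product

def scp_ir(wa, wb, hits):
    """SCP-IR: hits(wa AND wb)^2 / (hits(wa) * hits(wb)), by analogy with PMI-IR."""
    ha, hb = hits(wa), hits(wb)
    hab = hits(f"{wa} AND {wb}")
    return 0.0 if ha == 0 or hb == 0 else (hab ** 2) / (ha * hb)

def clueweb_infosimba(x_i, x_j, hits):
    pairs = list(product(x_i, x_j))
    return sum(scp_ir(a, b, hits) for a, b in pairs) / len(pairs)

# placeholder hit counts standing in for Solr queries over Anchor ClueWeb12
fake_counts = {"young": 9_000_000, "man": 12_000_000, "young AND man": 2_500_000}
hits = lambda q: fake_counts.get(q, 0)
print(clueweb_infosimba(["young"], ["man"], hits))
```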
System Input
The task 3 of SemEval-2014 consists of (1) paragraphto-sentence, (2) sentence-to-phrase, (3) phrase-to-word and (4) word-to-sense subtasks. Before submitting the pieces of texts to our system, we first performed simple stop-words removal with the NLTK toolkit (Bird et al., 2009). Note that in the case of the word-to-sense subtask, the similarity is performed over the word itself and the gloss of the corresponding sense 4 .
Output Values Transformations
The CW IS(.,.) similarity metric returns a value between 0 and 1. However, the subtasks require that each pair be attributed a score between 0 and 4, so an adequate scale transformation must be performed. For that purpose, we proposed linear, polynomial and exponential regressions and submitted three different runs, one for each regression. The regressions were tuned on the training dataset using the respective R regression functions with default parameters: lm(y ∼ x); lm(y ∼ x + I(x²) + I(x³)); and lm(log(y + ε) ∼ x), where ε is a small value included to avoid undefined log values. The regression results on the test datasets are presented in Figure 2. Figure 2: Linear, polynomial and exponential predictions for the test dataset of the paragraph-to-sentence subtask (colored dots); black dots correspond to the obtained ClueWeb InfoSimba value versus the manually assigned score in the training dataset.
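The same three score transformations can be sketched in Python (the actual runs used the R calls above); the training pairs, ε and the resulting coefficients here are purely illustrative.

```python
import numpy as np

def fit_maps(x_train, y_train, eps=1e-3):
    """Fit linear, cubic-polynomial and exponential (log-target) maps from [0,1] scores to [0,4]."""
    lin = np.polyfit(x_train, y_train, 1)
    poly = np.polyfit(x_train, y_train, 3)
    expo = np.polyfit(x_train, np.log(y_train + eps), 1)   # mirrors lm(log(y + eps) ~ x)
    return lin, poly, expo

def apply_maps(x, maps, eps=1e-3):
    lin, poly, expo = maps
    return np.polyval(lin, x), np.polyval(poly, x), np.exp(np.polyval(expo, x)) - eps

# toy training pairs: (CW-IS similarity, gold score in [0, 4])
x_tr = np.array([0.05, 0.2, 0.4, 0.6, 0.8, 0.95])
y_tr = np.array([0.3, 1.0, 1.8, 2.6, 3.3, 3.9])
maps = fit_maps(x_tr, y_tr)
print(apply_maps(np.array([0.1, 0.5, 0.9]), maps))
```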
Evaluation and Results
For evaluation purposes, two metrics have been selected by the organizers: Pearson correlation (Pearson, 1895) and Spearman's rank correlation (Hollander and Wolfe, 1973). Detailed information about the evaluation setup can be found in the task discussion paper (Jurgens et al., 2014).
All results are given in Tables 1 and 2 for each run. Note that the baseline metric is calculated for the longest common string (LCS) and that each regression has been tuned on the training dataset for each one of the four tasks.
First, in almost all cases, the results outperform the baseline. Second, performances show that with a certain amount of information (longer pieces of texts), interesting results can be obtained. However, when the size decreases, the performance diminishes and extra information is certainly needed to better capture the semantics between two pieces of text. Third, the polynomial regression provides better results for the Pearson correlation evaluation, while for the Rho test, linear and polynomial regressions get the lead. Note that this situation depends on the data distribution and cannot be seen as a conclusive remark. However, it is certainly an important subject of study for our unsupervised methodology.
Another key point is that training examples were used only for evaluation purposes 7 . In the case of Spearman's rank correlation, the linear and exponen-
Conclusions
In this paper, we proposed a general purpose system to deal with cross-level text similarity. The aim of our research was to push as far as possible the limits of language-independent corpus-based solutions in a general context of text similarity. We were also concerned with reproducibility and as such we exclusively used publicly available datasets and tools 8 . The results clearly show the limits of a simple solution based on word statistics. Nevertheless, the framework can easily be empowered with the straightforward introduction of more competitive resources. | 2,129.4 | 2014-08-23T00:00:00.000 | [
"Computer Science"
] |
The Statistical Approach for Overcoming the Sensor Saturation Effect in Passive Ranging
Abstract—The grey-level intensity distribution on thermal infrared images is estimated in this research. The generalized Pareto distribution describes the grey-level distribution better than the other statistics considered. The error of intensity-based passive ranging is too large at short distances, because the grey level of the imaging sensor is saturated. The suggested modification is well suited to compensating for the effect of sensor saturation. An experiment on a real saturated infrared sequence demonstrates that the distance estimation error of the suggested approach increases around three times more slowly than that of the conventional intensity-based approach.
I. INTRODUCTION
Optical flow and stereo vision are the most common techniques used to passively estimate the distance to an object. Both methods rely on the geometrical principle of triangulation. In optical flow, the baseline is created by the sensor motion, whereas in stereo the distance between cameras (baseline) is fixed [1]. The number of sensors used varies from one in the optical flow method, through two for the single-baseline approach, to three or more for multiple-baseline methods and methods exploiting networks of passive sensors. This research is focused on scenarios where only one passive imaging sensor is available.
A few different approaches for passive ranging using a single image sensor are known. The methods presented in [2] and [3] use the size changes of a target in the image sequence to compute the distance to the target. The approach for tracking emissive targets with a monocular passive sensor presented in [4] is based on atmospheric oxygen absorption in the near-infrared spectrum, while the research in [5] utilizes the spectral attenuation of two oxygen absorption bands in the visible and near-infrared spectra to estimate the distance to the target.
In a recent study [6], two new passive ranging methods based on intensity and contrast measurements were proposed and compared with the method based on object size measurement in [7]. It is shown in [6] that the distance estimation error of the contrast-based method is smaller than that produced by the size-change-based method, and that the intensity-based method produces even better results than the contrast method. The real-life application of distance estimation based on intensity measurements, using one [6] or more [8] passive sensors, is limited by the characteristics of the sensors used. The effect of sensor saturation significantly degrades distance estimation accuracy at relatively short distances [6]-[9]. After extracting the target's pixels from the background, the pixel intensity level is bounded by a threshold on one side, while the maximal pixel intensity that can be measured imposes another limit, known as sensor saturation.
The target tracking approach with two passive infrared sensors suggested in [8] cannot be used in cases where the distance between the sensors and the target is less than the value determined by the maximum output power of the sensors (saturation limit). It is well known that the influence of the sensor saturation effect increases as the distance from the target to the sensor decreases. The research in [9] overcomes saturation in the estimation of the atmospheric propagation model by fusing it with object surface measurements and target motion analysis.
The main goal of this research is enhancement of distance estimation based on intensity measurements when sensor operates in saturation conditions.Relevant literature does not include many reports on the phenomenon of saturation using single sensor, such that this paper is deemed to be a modest contribution to the important field of passive ranging.
Since the distance estimation is based on target's intensity measurements, it is essential to determine intensity in both cases, normal conditions and sensors saturation.The first case is solved in [6], and this research addresses the second, more complicated case.It is expected that the mean target's intensity in the saturation can be estimated more correctly with suggested method based on object's statistics knowledge and relevant measurements in saturation than by conventional methods based on measurements.
Another important topic of this research is to find the distribution that best fits the target's statistics and to estimate its parameters in saturation. Using Quantile-Quantile (QQ) plots of a real infrared video sequence of a target moving smoothly toward the sensor, the same statistical distribution of the target's pixels is verified before sensor saturation occurred. Eight commonly used statistical distributions are considered. The coefficient of linear correlation between the real data quantiles and the quantiles of the corresponding theoretical distribution on the QQ plot is used as the quality criterion for distribution fitting, and it is found that the generalized Pareto (GP) distribution satisfies the established criterion with the highest rank.
The GP distribution parameters can be determined by various methods [10], [11]. This research suggests a procedure for estimating the GP distribution parameters and for estimating the average of the target's grey level in saturation. It is shown that, by relying on target statistics obtained under normal operating conditions together with the relevant measurements in saturation, a significant improvement of distance estimation can be achieved. The quality of the suggested method is tested on a real infrared sequence, and the relative error of distance estimation in saturation is up to ten percent smaller than the error of the standard intensity approach.
The rest of paper is organized as follows.Section II describes the posed problem.Section III is dedicated to the QQ-plot analysis of the intensities of the target's pixels on a real infrared video sequence.Section IV deals with the choice of distribution fitting target statistics.The procedure for the estimation of GP distribution parameters; estimation of the average of the target's grey level in saturation and their verification through the experiment on real infrared sequence are described in Section V.
II. SENSOR SATURATION PROBLEM
The application of intensity based method on the real infrared sequence is studied.The sequence is recorded using the Dual Observer Passive Ranging System (DOPRS) that is designed for tracking a single airborne target.The system utilizes two thermal cameras and calculates distance by triangulation method.
In this research, the sequence from one camera is used, while the distance obtained from the DOPRS is used as the reference distance in the analysis and comparison of results. The reference distance to the target in the analysed image sequence is determined with an absolute error of less than five meters. Figure 1(a) shows the first, Fig. 1(c) the 150th and Fig. 1(e) the 350th frames of the analysed infrared sequence; scene intensity in an infrared image is represented on a grayscale image among 256 levels of grey, and the target in Fig. 1(e) has a significant amount of white pixels, indicating saturation of the sensor. The original approach suggested in [6] does not produce acceptable distance estimates when the target is close to the sensor, as a consequence of target intensity saturation. The probability of the target pixel intensities in the i-th frame is defined in (1) as the fraction of target pixels at each grey level. Figure 2 shows the probability of the target pixel intensity (1) as a function of the intensity level in the first frame of the analysed sequence. It may be noted that the vector p(1) is equal to zero when the pixel intensity is less than 69, which is the threshold value for the first frame.
Figure 3 presents the result of applying (1) to the 350th frame. In addition to the p(350) values being zero when the pixel intensity is less than 136 (the threshold for this frame), a high probability can be noted at the upper limit of the range, as a result of sensor saturation. The result of applying (1) to the whole sequence of images is the probability matrix PT = [p(1), p(2), ..., p(N)], which means that the matrix PT has size 256 × N. The matrix PT is illustrated in Fig. 4, where it can be noted that the threshold parameter increases from the first frame (69) to the 350th frame (136), and the high probabilities in PT (bright pixels in Fig. 4) move from the lower limit of the range (Fig. 2) to the upper limit (Fig. 3). Saturation becomes dominant after the 300th frame, meaning that the intensity of more than 20 percent of the target's pixels is measured at the upper sensor limit (grey level 255).
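In code, the per-frame probability vector and the 256 × N matrix PT can be built as normalized grey-level histograms of the segmented target pixels; the segmentation step and the toy frame data below are assumptions for illustration.

```python
import numpy as np

def target_intensity_probability(target_pixels: np.ndarray) -> np.ndarray:
    """Fraction of target pixels at each of the 256 grey levels (per-frame probability vector)."""
    hist = np.bincount(target_pixels.ravel().astype(np.int64), minlength=256)[:256]
    return hist / hist.sum()

def probability_matrix(frames_target_pixels) -> np.ndarray:
    """Stack per-frame probability vectors into a 256 x N matrix PT."""
    return np.column_stack([target_intensity_probability(f) for f in frames_target_pixels])

# toy example: 3 "frames" of already-segmented target pixel intensities
rng = np.random.default_rng(0)
frames = [np.clip(rng.normal(loc, 20, 500), 0, 255) for loc in (90, 160, 245)]
pt = probability_matrix(frames)
print(pt.shape, "saturated fraction in last frame:", pt[255, -1])
```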
III. QQ-PLOT SEQUENCE ANALYSIS
It is assumed that the mean target intensity in saturation can be estimated on the basis of the object's statistics. As there is a single target in the sequence, it is expected that its intensity has the same distribution function throughout the sequence, although the parameters of this distribution can change from frame to frame. 199 QQ plots of the target pixel intensities from frame (i) versus those from frame (i−1) of the analysed IR sequence are shown in Fig. 5, where i takes values from 2 to 200. A short review of the QQ-plot technique is presented in [13].
It can be seen that most of the quantiles are close to a straight line of slope 45° through the coordinate origin, indicating the same distribution of target intensity over the sequence [13]. Deviations from this line imply that the distribution parameters change over the sequence, as expected with regard to the target's intensity levels shown in Fig. 4.
IV. TARGET'S INTENSITY DISTRIBUTION CONSIDERATION
It is assumed that the distribution of the intensity levels of the target can be described by one of the following distributions: the Weibull, Birnbaum-Saunders (BS), Gaussian, Gamma, Nakagami, Lognormal, Inverse Gaussian and generalized Pareto (GP) distributions [10], [11]. The real data are fitted to each of these distributions using the maximum likelihood method, and as a result two parameters describing the corresponding distribution are obtained for each frame of the sequence. To select the distribution that best fits the real data, the following procedure is performed (a minimal sketch of the criterion is given after this list):
- QQ plots of the analysed theoretical distribution against the real data are produced for the first N = 200 frames of the video sequence.
- The mean value r̄(1:N) of the N = 200 linear correlation coefficients obtained from these QQ plots is calculated for each analysed distribution.
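A hedged sketch of that ranking criterion for the GP candidate, using scipy's genpareto as a stand-in for the fitting described above; the sample here is synthetic, and the per-frame values would be averaged over all 200 frames in practice.

```python
import numpy as np
from scipy import stats

def qq_correlation(sample: np.ndarray, dist, params) -> float:
    """Linear correlation between empirical quantiles and theoretical quantiles of a fitted distribution."""
    probs = (np.arange(1, sample.size + 1) - 0.5) / sample.size
    empirical = np.sort(sample)
    theoretical = dist.ppf(probs, *params)
    return np.corrcoef(empirical, theoretical)[0, 1]

# toy per-frame sample of target grey levels (placeholder for real frame data)
rng = np.random.default_rng(1)
sample = stats.genpareto.rvs(0.1, loc=70, scale=25, size=400, random_state=rng)

params = stats.genpareto.fit(sample)          # maximum-likelihood fit, as in the paper
print("QQ correlation (GP):", round(qq_correlation(sample, stats.genpareto, params), 4))
```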
The results of the analysis are presented in Table I. On the basis of these results it can be concluded that the GP distribution describes the target's intensity most closely under the established criterion: its mean linear correlation coefficient over the two hundred analysed frames is the largest, r̄(1:N) = 0.9937. As an example, Fig. 6 and Fig. 7 show QQ plots of the real data quantiles from one frame versus the theoretical quantiles of the GP (first-ranked) and BS (second-ranked) distributions, respectively, where the straight lines represent a perfect match to the distribution. The QQ plot in Fig. 6 approximately matches the straight line, while in Fig. 7 the deviations from the line are significant, which is an additional confirmation of the correctness of the chosen distribution function.
V. EXPERIMENTAL RESULTS
Having previously established that the target pixel intensity follows a GP distribution, the distance to the target in saturation is estimated indirectly on the basis of estimates of the shape parameter k and scale parameter of the GP distribution (Appendix A), together with the measured values of the threshold. It is assumed that sensor saturation occurs when more than 20% of the target's pixels are at the maximum pixel intensity level, which in the analysed sequence happens after 289 frames. The GP distribution parameters k and σ can be determined by various methods [10], [11], depending on their range. In [6], the average of the target's grey level (intensity) is used to estimate the distance to the target. In order to preserve information about the intensity mean, it is more convenient to use the method of moments (MOM) rather than the maximum likelihood method to estimate the GP distribution parameters. Figure 8 shows the estimation of the shape parameter k up to the frame at which the sensor enters saturation (grey circles); the parameter k data are fitted with a linear polynomial based on the estimates obtained before saturation arises, and a prediction of k is obtained as a result (full line). The scale parameter estimates before saturation are shown with grey circles in Fig. 9; these data are fitted with a quadratic polynomial, and a prediction of the scale parameter is obtained as a result (full line). According to [6], the distance to the target in the i-th frame is given by relation (3), where D(i) and D(i−1) are the distances from the object to the sensor, and I(i) and I(i−1) are the average grey levels of the object in two successive frames. An initial range D(1) and a reliable estimate of the extinction coefficient ς are required. For this experiment, the coefficient ς is estimated on the test sequence preceding the analysed one, as suggested in [6], while the initial distance D(1) is taken from the DOPRS. The intensity of the target is presented in Fig. 10: the grey line denotes the intensity measured directly from the sequence, while the black line identifies the intensity estimate based on the target statistics (Appendix A), with the predicted parameters used from the frame at which saturation is detected. It can be noted that before saturation occurred, the estimated intensity fitted the measured intensity well, since the estimate is computed from the mean and variance of the real signal; in saturation, the two diverge, as expected. The distances to the target in the analysed sequence are calculated from (3) using both intensities, and the results, together with the true distance from the DOPRS, are shown in Fig. 11. The distance D(1) from the DOPRS is used to initialize (3) in both calculations. The relative errors of the distance estimates obtained by the original approach [6] and by the suggested modification are shown in Fig. 12. A significant improvement of the distance estimation is clearly observable in the saturated region (right side of Fig. 12): the distance estimation error of the GP-based approach increases around three times more slowly than that of the conventional intensity-based approach. This confirms the assumption that the distance to the target in saturation can be estimated more accurately from the determined object statistics and the relevant measurements in saturation than by the conventional method [6] based on raw measurements.
VI. CONCLUSIONS
In the authors' recent research [6], two new passive ranging methods based on intensity and contrast measurements were proposed and compared with the method based on object size measurements. It was shown that the distance estimation error of the contrast-based method is smaller than that produced by the size-change-based method, and that the intensity-based method produces even better results than the contrast method. However, the error of intensity-based passive ranging is unacceptably large at short distances, since the grey level of the acquisition sensor is saturated.
This research suggests the extension of the intensity method for passive ranging using a single camera operating in normal and saturation conditions, enabling the significantly better distance estimation at short ranges.Estimation of the target's mean value of grey level in nonideal conditions (sensor saturation) is based both on evaluation of object statistics and image measurements, instead of using the only image measurements in standard intensity based approaches.It is found that general Pareto distribution is fitting real target intensity best, compared to other distributions analysed in the paper.A simple algorithm for GP parameters estimation and prediction is suggested and experimentally confirmed.
Experiment on real saturated infrared sequence verifies that a relative error of distance estimation is up to ten percent smaller and increases around three times slower compared to error of the conventional intensity based approach.Although the distance estimation error increases with intensity saturation time, the new approach enables additional time for target tracking, depending on its speed and size as well as of used image sensor characteristics.
APPENDIX A
Let X be a random variable. The cumulative GP distribution (GPD) function with location, shape and scale parameters θ, k (k ≠ 0) and σ (σ > 0), respectively, is defined as F(x | k, σ, θ) = 1 − (1 + k(x − θ)/σ)^(−1/k) (5), and the probability density function of the three-parameter generalized Pareto distribution is f(x | k, σ, θ) = (1/σ)(1 + k(x − θ)/σ)^(−1−1/k) (6). In applications where the threshold parameter θ is known, subtracting it from the signal X allows the use of the two-parameter GPD (θ = 0); then (5) becomes F(x | k, σ) = 1 − (1 + kx/σ)^(−1/k) (7), and (6) transforms to f(x | k, σ) = (1/σ)(1 + kx/σ)^(−1−1/k) (8).
The mean value and the variance of GPD(k, σ) have the following expressions: E[X] = σ/(1 − k) (9), valid for k < 1, and Var[X] = σ²/((1 − k)²(1 − 2k)) (10), valid for k < 0.5, while the mean value of the three-parameter GPD(k, σ, θ) is θ + σ/(1 − k), for k < 1. The time-honoured and direct MOM is widely used for estimating the parameters of the two-parameter GP distribution [10]. The MOM estimates of the parameters k and σ are obtained from the expressions for the mean (9) and the variance (10) as k̂ = (1/2)(1 − x̄²/s²) and σ̂ = (x̄/2)(x̄²/s² + 1), provided k̂ < 0.5, where x̄ and s² stand for the sample mean and the sample variance, respectively [10].
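A sketch of these moment relations in code, following the two-parameter GPD parameterization given above; treat it as illustrative rather than the authors' implementation, and note that the toy "excess" data and threshold value are assumptions.

```python
import numpy as np

def gpd_mom(sample: np.ndarray):
    """Method-of-moments estimates of GPD shape k and scale sigma (valid for k < 0.5)."""
    m, v = sample.mean(), sample.var(ddof=1)
    ratio = m * m / v
    k_hat = 0.5 * (1.0 - ratio)
    sigma_hat = 0.5 * m * (ratio + 1.0)
    return k_hat, sigma_hat

def gpd_mean(k, sigma, theta=0.0):
    """Mean of the (three-parameter) GPD, used to reconstruct the saturated target intensity."""
    return theta + sigma / (1.0 - k)

# toy usage: excesses over the measured threshold from an unsaturated frame
rng = np.random.default_rng(2)
excesses = rng.gamma(shape=2.0, scale=10.0, size=300)   # placeholder intensity excesses
k_hat, sigma_hat = gpd_mom(excesses)
print("k =", round(k_hat, 3), "sigma =", round(sigma_hat, 2),
      "reconstructed mean =", round(gpd_mean(k_hat, sigma_hat, theta=136.0), 2))
```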
Manuscript received March 16, 2013; accepted September 17, 2013.Ministry of Education, Science and Technological Development of the Republic of Serbia, supported this work (projects III-47029 and TR-32023).
Fig. 2. Probability p(1) of the target pixel intensities in the first frame.
Fig. 3. Probability p(350) of the target pixel intensities in the 350th frame.
Fig. 4. Probability matrix PT of the target pixel intensities in the sequence.
Note: different notations (differing in the sign of the shape parameter) exist for the cumulative generalized Pareto distribution (GPD) function; the notation used throughout this research is given in Appendix A.
Fig. 10. Intensity of the target: grey line - direct measurements from the sequence; black line - values calculated from the estimated GP shape and scale parameters and the determined threshold.
Fig. 12. Relative errors of the distance estimates in the sequence.
TABLE I. DISTRIBUTION FITTING RESULTS. | 4,099.2 | 2014-05-02T00:00:00.000 | [
"Materials Science"
] |
Receptor tyrosine phosphatase PTPγ is a regulator of spinal cord neurogenesis
During spinal cord development the proliferation, migration and survival of neural progenitors and precursors is tightly controlled, generating the fine spatial organisation of the cord. In order to understand better the control of these processes, we have examined the function of an orphan receptor protein tyrosine phosphatase (RPTP) PTPγ, in the developing chick spinal cord. Widespread expression of PTPγ occurs post-embryonic day 3 in the early cord and is consistent with a potential role in either neurogenesis or neuronal maturation. Using gain-of-function and loss-of-function approaches in ovo, we show that PTPγ perturbation significantly reduces progenitor proliferation rates and neuronal precursor numbers, resulting in hypoplasia of the neuroepithelium. PTPγ gain-of-function causes widespread suppression of Wnt/β-catenin-driven TCF signalling. One potential target of PTPγ may therefore be β-catenin itself, since PTPγ can dephosphorylate it in vitro, but alternative targets are also likely. PTPγ loss-of-function is not sufficient to alter TCF signalling. Instead, loss-of-function leads to increased apoptosis and defective cell–cell adhesion in progenitors and precursors. Furthermore, motor neuron precursor migration is specifically defective. PTPγ therefore regulates neurogenesis during a window of spinal cord development, with molecular targets most likely related to Wnt/β-catenin signalling, cell survival and cell adhesion.
Introduction
The spinal cord is an excellent model system in which to study the temporospatial proliferation, migration and differentiation of neuronal progenitors (Poh et al., 2002). The pre-patterning of the cord dorsoventrally by morphogens lays down progenitor domains for interneurons and motor neurons (MNs) (Poh et al., 2002) and these progenitors then generate precursors that migrate laterally to their final positions, before differentiating (Shirasaki and Pfaff, 2002). Many of these cellular events are driven by receptor-mediated cell signalling, interpreting signals for example from shh, Wnts and BMPs (Cayuso and Marti, 2005;Ulloa and Briscoe, 2007). These overlapping signalling networks are modulated in part by cross talk with pathways governed by protein tyrosine phosphorylation, in turn controlled by protein tyrosine kinases (PTKs) and protein tyrosine phosphatases (PTPs). Phosphotyrosine signals govern isthmusinduced patterning (Sato et al., 2004), oligodendrocyte production (Pringle et al., 2003), and MN production and survival (Gauthier et al., 2007;Huang and Reichardt, 2003;Jungbluth et al., 1997;Ke et al., 2007). The proliferation of neural stem cells and progenitors also requires receptor tyrosine kinase (RTK) signalling (reviewed in (Cameron et al., 1998). Although tyrosine kinase actions are generally well defined, the roles of the tyrosine phosphatases during neurogenesis are poorly understood. A family of 21 receptor-like PTPs (RPTPs) exists (Alonso et al., 2004;Stoker, 2005;Tonks, 2006) and some of these control axon growth and guidance (Burden-Gulley et al., 2002;Ensslen-Craig and Brady-Kalnay, 2004;Rashid-Doubell et al., 2002;Shintani et al., 2006) and neuronal survival (Sakaguchi et al., 2003;Tisi et al., 2000). Dynamic expression of RPTP genes has been observed in the early developing brain and spinal cord (Chilton and Stoker, 2000;Gustafson and Mason, 2000;Ivanova et al., 2004;Sommer et al., 1997), raising the possibility that RPTPs also play a part in controlling neurogenesis and initial neuronal maturation. Gene deficiency models in mice have not currently led to a greater understanding of this area. For example, concomitant loss of both the PTPσ and PTPδ genes does lead to loss of spinal motor neurons, but only long after the MN pools have developed normally and extended axons (Uetani et al., 2006).
PTPγ is an avian RPTP expressed in the early spinal cord. It is a type V RPTP, expressed in the first spinal interneurons in chick (Gustafson and Mason, 2000) and later more broadly in the spinal cord (Chilton and Stoker, 2000). The cellular role of PTPγ in the cord is unknown, but the protein has been implicated in suppressing neurite formation in PC12 cells (Shintani et al., 2001) and suppressing the growth of breast cancer cells (Liu et al., 2004). To assess the early functions of PTPγ during neurogenesis we have used gain-of-function (GOF) and loss-of-function (LOF) approaches in the early chick spinal cord. Our results suggest that PTPγ has several interrelated functions in controlling proliferation, survival and adhesion of progenitors and precursors.
Expression pattern of PTPγ in the early avian spinal cord
By stage HH15 of chick embryogenesis, early dorsal spinal interneurons and brachial motor neurons (MNs) of the lateral motor column (LMC) are being born (Hollyday and Hamburger, 1977). Over 95% of LMC MNs are produced by stage HH24, followed by motor pool remodelling (Hollyday, 1980; Landmesser, 1978). PTPγ had been previously observed in the early spinal cord (Chilton and Stoker, 2000; Gustafson and Mason, 2000), but its precise temporospatial expression pattern was not defined. PTPγ mRNA expression was thus analysed in more detail at HH18, HH20 and HH22. At HH18, PTPγ mRNA colocalised only in dorsal, Lim1-expressing interneurons, as previously described (Gustafson and Mason, 2000) (Fig. 1C and F). At HH20, expression is maintained dorsolaterally, but also spreads extensively ventrally (Fig. 1L). Expression is most prominent in lateral regions in maturing motor neurons and interneurons. Lesser expression is found throughout the mediolateral axis, including most ventricular progenitor cells. Expression at HH22 is similar (Fig. 1R and S). Expression is present in the motor neuron progenitor domain (pMN) at HH20-22, but is relatively lower than in the progenitor domains immediately above (p2) and below (p3) (Fig. 1L, R, and S; Supp. Fig. 2). At E5 (HH26-27), there is much reduced expression in the motor horns (star, Fig. 1T), while dorsal interneurons and ventricular cells maintain expression.
Fig. 1. Expression of PTPγ in the early spinal cord. Expression of the chick PTPγ gene was detected with in situ hybridization at stages HH18 (F), HH20 (L), HH22 (R, S) and HH26-27 (E5) (T) and compared to localisation of neurofilament-associated protein 3A10 (A, G, and M), Lim homeodomain factors Isl1/2 (B, H, N), Lim1/2 (C, I, O), Lim3 (D, J, P) and Mnr2/HB9 (E, K, and Q). Early dorsal interneurons express both Lim1/2 and PTPγ (white arrowheads; C and F). Panel S is an enlarged, rotated version of R to compare with expression at HH26-27 (T). Square brackets demarcate the pMN domain (F, L, R, and S). In S and T: rp, roof plate; fp, floor plate; the p2 and p3 regions are demarcated in S; stars indicate motor horns. Scale bar = 50 μm (A-S), 100 μm (T).
Fig. 2. PTPγ-directed shRNA knock-down. A, immunoblots showing PTPγ protein in transfected 293T cells. PTPγ expression vector was co-transfected with either control shRNA Silencer vector (lanes c), or shRNA vectors Si1-6. Densitometry data are in Suppl. Fig. 9. Panels B-D show a HH22 neural tube electroporated with Si3 and GFP vector, after immunolabelling with 3A10 (B), or in situ hybridisation with either PTPγ (C) or NeuroM (D) riboprobes. E-P show loss of neurons after Si3 treatment. Embryos were electroporated with Si3 at HH10-11 and fixed at HH18 (E-H), HH20 (I-L) and HH22 (M-P; the same embryo as in 2B-D). Brachial sections are shown, immunostained for Isl1/2 (F, J, N), Lim1/2 (G, K, O) and Lim3 (H, L, and P). Panels E, I and M show 3A10 staining and co-electroporated GFP; control embryos are in Suppl. Fig. 4. Scale bars = 50 μm. Ratios of labelled cell numbers on electroporated versus non-electroporated spinal cord sides are shown in Q-S. Black columns represent negative control shRNA treatments, grey columns represent Si3 treatments. Numbers within bars show the number of embryos used. Asterisks indicate statistical significance (P < 0.01; Student's t-test).
PTPγ loss-of-function (LOF) and neurogenesis
Short hairpin RNA (shRNA)-and siRNA-mediated suppression of gene function is an effective technique in ovo (Harris et al., 2008;Katahira and Nakamura, 2003;Pekarik et al., 2003;Wakamatsu et al., 2007). RNA interference was used here to induce PTPγ LOF. Six shRNAs against PTPγ were cloned into pSilencer 1.0-U6 (Si1-Si6; Supp. Fig. 1). The sequences of these short hairpins had homology only to PTPγ when searched with BLASTN, thus they should be specific to PTPγ and no other PTP in the avian genome. To control for nonspecific electroporation or shRNA off-target effects, a GFP expression vector and a negative control pSilencer 1.0-U6 vector (Ambion; see Experimental methods) were used. Vectors were co-transfected with a full-length, wild type PTPγ expression plasmid in 293T cells, where the most effective ones, Si1, Si3 and Si6, consistently induced 80-90% knockdown of PTPγ ( Fig. 2A; quantified in Suppl. Fig. 3). As current anti-PTPγ antibodies do not work in immunohistochemistry, we used in situ hybridisation to visualise the Si vector-induced suppression of PTPγ mRNA in tissues (Fig. 2C). NeuroM, which is expressed in a pattern that partially overlaps with PTPγ, showed no reproducible alteration (Fig. 2D).
Vectors Si1, Si3 and Si6 were electroporated separately into HH10-11 neural tubes in ovo and examined at HH18, HH20 and HH22. Similar electroporations were performed with the control Silencer vector. This developmental window includes (i) early neuron production when there is little or no PTPγ expression (Fig. 1F), (ii) the onset of widespread PTPγ expression (HH20) and (iii) a point near the end of brachial MN production (HH22). In each electroporated spinal cord, neuronal numbers were counted on both the electroporated and contralateral sides, and a ratio of these was generated. The means of these ratios are graphed in Fig. 2Q-S, where comparison can be made to the negative control embryos. At HH18, electroporated cords had normal symmetry, neuronal patterns and numbers ( Fig. 2E-H and Q-S). At HH20 and HH22, many Si3-electroporated spinal cords exhibited morphological distortion, predominantly with atrophied ventral horns ( Fig. 2I-P). Immunostaining for Isl1/2, Lim1/2 and Lim3 confirmed that significant neuronal precursor losses began between HH18 and HH20, during the onset of widespread PTPγ expression ( Fig. 2Q-S). In contrast, control shRNA-electroporated spinal cords were predominantly normal, with only occasional, minor tissue distortion and no significant change in precursor numbers ( Fig. 2Q-S; Supp. Fig. 4).
HH18-HH22 coincides with the relatively late birth period of Lim1-expressing LMCl neurons in the brachial region (Hollyday and Hamburger, 1977; Sockanathan and Jessell, 1998; Whitelaw and Hollyday, 1983). LMCl neuron precursors initially express Isl-1 (prior to HH22) and then briefly coexpress Lim-1 before extinguishing Isl-1 after cessation of mitosis (Tsuchida et al., 1994). At HH26-27 (E5), spinal cords electroporated with Si1, Si3 or Si6 still showed reduced motor horn size (38/87 visibly affected) (Fig. 3A-F). The total numbers of LMC neurons, labelled with Isl-1/2, were not significantly altered, although there was histological perturbation of the motor pool structure (see Cell adhesion and migration). Significantly, however, there was a deficiency in Lim1/2-expressing neurons of the LMCl and, to a lesser degree, of the dorsal interneurons (Fig. 3G) (Tsuchida et al., 1994). The less effective Si vectors Si4 and Si5 (n = 22, data not shown), and the control shRNA (Fig. 3G), did not significantly affect the LMCl population. From the collective Si1, Si3 and Si6 data, we also believe that the effectiveness of the shRNA begins to wear off by HH22, due to plasmid dilution. For example, the numbers of Isl1/2- and Lim3-positive precursors slightly recover by HH22 compared to HH20 (Fig. 2), and the same holds for phosphohistone labelling (Supp. Fig. 4). A similar effect is seen with MNR2/HB9 (data not shown).
Our data from HH18 to HH27 thus indicate that PTPγ is a necessary factor for the optimal generation or survival of spinal neurons, with MNs generated in the HH18-HH22 window being particularly sensitive under the technical approach used.
Reduction in mitoses
To investigate how spinal cord atrophy might be occurring, electroporated cords were screened for phosphohistone-3 levels. Control shRNA induced a small reduction in mitoses as indicated by phosphohistone-positive cells (Fig. 3N), but Si3-electroporated embryos had 40% fewer mitotic cells at HH20 (P < 10⁻³ compared to control) and 22% fewer at HH22 (P < 10⁻⁵) (Fig. 3N), indicating that PTPγ loss-of-function suppresses the mitotic rate in progenitors.
We addressed whether PTPγ action was likely to be upstream or downstream of Notch signalling, a primary regulator of proliferation versus differentiation in neural progenitors. Hes5-1 expression, a primary readout of Notch (Fior and Henrique, 2005), remains normal in Si3-treated tissues (Fig. 4A-D). Moreover, homeodomain proteins whose expression is maintained by Notch 1 signalling, such as Pax6 and Nkx6.1, are also normally expressed (Fig. 4E and G). The effects of PTPγ signalling thus appear to be either downstream or independent of Notch.
Fig. 3. PTPγ loss-of-function effects. Panels A-G demonstrate permanent loss of Lim1/2-expressing neurons. Embryos were electroporated at HH10-11 with either a GFP vector (A and B) or shRNA vectors (C-F) and developed to HH26-27 (E5). Brachial sections were immunostained for Isl1/2 (A, C, and E) and Lim1/2 (B, D, and F). Square brackets indicate the LMCl population of Lim1-expressing motor neurons (B). Treatment with either Si1 (E and F) or Si6 (C and D) results in relatively normal Isl1/2 expression, but reduced Lim1/2 expression in LMCl (arrows in D and F); "e" indicates electroporated side. Isl1/2- and Lim1/2-expressing cells were counted on both sides of each spinal cord and ratios generated (G). Means and standard deviations of these ratios are shown for embryos treated with control shRNA (dark bars), Si3 (grey bars for Isl1/2), or a combination of Si1, Si3 or Si6-treated embryos (grey bars, Lim1/2). The Lim1/2(MN) bars represent motor neuron counts in ventral horns only; Lim1/2(IN) bars represent dorsal interneuron counts (dorsal to white arrowheads in B and D). The numbers within columns show sample sizes. Asterisks indicate statistical significance (P < 0.01; Student's t-test). Panels H-P demonstrate mitotic reduction and increased apoptosis after PTPγ shRNA treatment. Embryos electroporated with Si3 (I, J, L, and M) or negative control shRNA vector (H and K) (electroporated side marked e) were fixed at HH20 and HH22 and immunostained for phosphohistone-3 (H-J) or activated caspase-3 (K-M). Phosphohistone-positive cells were counted and graphed (N). Negative controls, black columns; PTPγ shRNA Si3, grey columns. Numbers in columns indicate sample size. Asterisks represent statistical significance (** P < 10⁻⁵; * P < 0.001). O and P show counts of activated caspase-immunostained cells (HH20, O; HH22, P); each spot represents one embryo, with open circles representing Si3-electroporated embryos and closed circles representing control shRNA-treated embryos. Arrows in L and M indicate caspase-expressing cells. Scale bars = 100 μm (A-F), 50 μm (H-M).
Increased apoptosis
Very low levels of cell death normally occur in the spinal cord at stages HH20 and HH22 (Homma et al., 1994), and this is prior to the main onset of caspase-dependent programmed cell death of brachial MNs at HH22 (Li et al., 1998). Control shRNA did not change the rate of apoptosis (Fig. 3H, K, O, and P). In contrast, activated caspase-3 expression was significantly elevated after Si3 treatment (9.7-fold increase at HH20 [P < 10⁻⁴]; 8.9-fold increase at HH22 [P = 0.005]) (Fig. 3I, L, M, O, and P). The increased apoptosis accompanying PTPγ loss-of-function implicates this enzyme in sustaining the survival of progenitors and precursors in the early spinal cord.
Elevated PTPγ suppresses neurogenesis
To address further the role of PTPγ, gain-of-function (GOF) experiments were carried out using a wild type PTPγ (wtPTPγ) expression vector. Increased PTPγ expression caused prominent loss of ventral tissue (Fig. 5A-H), whereas control electroporation did not (Supp. Fig. 4D-G). Unlike with LOF treatment, however, the dorsal tissue was more clearly and frequently atrophied after PTPγ GOF. Underlying these histological changes, MNs and dorsal interneurons were both significantly reduced in numbers (Fig. 5; Supp. Fig. 5). PTPγ overexpression did not alter Pax3 or Nkx6.1 expression (data not shown). The underlying basis for neuronal loss appears to be, at least in part, a reduction in mitosis again, as judged by phosphohistone-3 immunostaining.
PTPγ modulates Wnt/β-catenin signalling
A possible target of PTPγ could be the canonical Wnt/β-catenin signalling pathway, since this is a major regulator of proliferation and differentiation in the cord, activated by Wnt signals in a high to low dorsoventral gradient (Megason and McMahon, 2002). β-catenin modulates cadherin-dependent cell-cell adhesion at adherens junctions of epithelia and also acts as a direct transcriptional regulator in complexes with TCF/LEF proteins (Clevers, 2006;Lilien and Balsamo, 2005). These nuclear and junctional activities of β-catenin can both be regulated by its tyrosine phosphorylation and several phosphatases have been implicated in this process (Lilien and Balsamo, 2005;Sallee et al., 2006).
Blockade of TCF/LEF signalling function in the chick spinal cord gives similar gross phenotypes to those seen after PTPγ perturbation (Megason and McMahon, 2002). To examine if PTPγ influences Wnt/β-catenin signalling in ovo, we assessed TCF activity with a pTOPGFP reporter. With this vector, GFP expression reflects the binding of an activated β-catenin-TCF/Lef complex to four consensus TCF/Lef binding sites in the GFP promoter (Dorsky et al., 2002). pTOPGFP was introduced at HH11 along with either Si3, control shRNA vector, wtPTPγ vector, or a negative control plasmid. As expected, control embryos at HH20 showed a steep dorsoventral gradient of TOPGFP (Fig. 6G) (Megason and McMahon, 2002). As judged by GFP/RFP ratios, Si3 did not significantly affect GFP expression (Fig. 6C, D, J-L, and P). In contrast, wtPTPγ strongly suppressed TOPGFP expression by at least 80% (Fig. 6E, F, M-O, and P). Because the pTOPGFP vector generated a very steep gradient with no detectable GFP in the ventral cord, an alternative pTOPRFP vector was used to assess the ventral TCF activity level. Once again, widespread expression of wtPTPγ dramatically suppressed TOPRFP expression, and this was observable across the entire spinal cord (Fig. 6W-Y). Si3 expression had no obvious effects (Fig. 6T-V).
In the Wnt pathway, β-catenin is therefore a potential target for wtPTPγ. Tyrosine phosphorylation of β-catenin alters its ability to localise to cadherins and can also influence its transcriptional potential (Lilien and Balsamo, 2005; Sallee et al., 2006; Yan et al., 2006). We initially attempted to examine endogenous levels of β-catenin phosphorylation in HH22 spinal cords. However, the endogenous phosphorylation level was very low and we could not reliably determine whether PTPγ was altering this pattern. Similarly, antibodies to phosphoY654 and phosphoY489 (Rhee et al., 2007) were not sensitive enough to detect phospho-catenin in ovo. Instead, therefore, we tested whether the phosphorylation state of β-catenin at one of its regulatory tyrosines, Y654, influences β-catenin compartmentalisation between its active sites, the nucleus and adherens junctions (AJ), in spinal cord cells. Compared to cells expressing wild type β-catenin-GFP, we found that β-catenin-GFP fusion proteins with Y654-F mutations (mimicking a dephosphorylated state) (Murase et al., 2002) localised predominantly to AJ (Fig. 7A-D). In contrast, Y654-E mutations (mimicking phosphorylation) caused the protein to localise predominantly in the cytoplasm and nucleus (Fig. 7GI-L). Tyrosine phosphorylation of β-catenin, at least at Y654, is thus capable of altering protein localisation and thus potentially β-catenin function in the cord. Interestingly, tyrosine phosphorylation of β-catenin may be in itself insufficient to alter signalling in these cells, since the Y654E mutation might be expected to have a dominant-active phenotype, inducing tissue hypertrophy, but it does not (data not shown). In a second approach, we tested whether PTPγ could dephosphorylate β-catenin in vitro. Tyrosine-phosphorylated β-catenin was immunopurified and treated in vitro with purified catalytic domains of human PTPγ (Barr et al., 2009b). WtPTPγ did indeed efficiently dephosphorylate β-catenin (Fig. 6Z). Thus, among the potential targets of PTPγ in ovo, β-catenin remains one candidate.
Fig. 6 (legend, continued). Total GFP and RFP signals were quantified and GFP/RFP ratios determined and graphed (P) (see Experimental methods). In P, each column contains the sample size; error bars represent SD (* P < 0.01; ** P < 0.001). Q-Y show similar assays with a more sensitive TOPRFP reporter (and GFP electroporation reporter), in HH22 spinal cords. TOPRFP reveals a full dorsoventral gradient of TCF activity, which is similar in control shRNA- and Si3-treated embryos (R and U, respectively). X shows almost complete extinction of TOPRFP signal after wtPTPγ expression. Z shows in vitro dephosphorylation of β-catenin protein by PTPγ. β-catenin was immunopurified from 293T cells and incubated with either 50 μg or 100 ng purified human PTPγ D1/D2 catalytic domains. Samples were immunoblotted to detect phosphotyrosine and β-catenin. Whole cell lysate samples are also shown. Scale bar = 500 μm (A-F), 50 μm (G-Y).
Cell adhesion and migration
In Si-treated embryos, the pMN region was commonly atrophied. In a third of HH26-27 (E5) and HH31 (E7) embryos treated with Si3, there were also striking abnormalities in mediolateral positioning of MN precursors. These abnormal cells became embedded in the ventricular zone, ingressing into the lumen in more extreme cases (Fig. 8A-D). Concomitantly, the ventricular tissue was much reduced or was missing (Fig. 8C, black arrow; D, black arrowhead). Although this was phenotypically similar to the loss of spinal ventricular tissue and the ventricular location of Isl1-expressing neurons in Notch1-deficient mice, we have already noted above that Notch signals appear to be unaltered in the Si3-treated chick embryos.
The medial mislocation of neurons occurred only in the pMN, not dorsally or ventrally. It was clearly visible by HH22 and thus must be initiated earlier. Approximately one third of HH22 Si3-treated embryos had misplaced neurons (Fig. 8E, F, G-I; asterisks in 8I), with ventricular and subventricular tissue already reduced in volume (asterisks in Fig. 8E and F, arrowheads in 8 H; Suppl. Fig. 6). Neuronal mislocalisation was also seen with Si1 (Suppl. Fig. 7), but not with the pSilencer negative control (Supp. Fig. 8), or another negative control shRNA vector pRFPRNAi-LacZ (Ark genomics; data not shown).
Optical sectioning also showed that most heterotopic neurons were GFP-expressing (48/49 misplaced neurons counted; Fig. 8G-I), indicating a cell-autonomous defect. There was no alteration in mitotic spindle angle distributions (Suppl. Fig. 6) (Zhong and Chia, 2008), suggesting that the balance of self-renewal and neurogenic fate was unaltered. Loss of progenitors through death, or their displacement by motor neuron precursors that fail to migrate, may instead be the most important contributing factors. In the neuroepithelium outside the pMN, PTPγ LOF also appears to perturb cell-cell adhesion. Nuclei of the neuroepithelium are normally elongated and well aligned mediolaterally, reflecting the overall shape and polarity of the cells (Fig. 8J, left of midline). The average angles of nuclei were measured in spinal cords treated with control shRNA or Si3. The spread of angles, represented as a standard deviation from a mean angle, reflects the degree of relative alignment across the population. PTPγ LOF increased the spread of angles significantly and also increased the number of unpolarised nuclei ( Fig. 8J and K), both a reflection of more random orientation. PTPγ is therefore required for movement of motor neuron precursors and, more broadly, the orderly cell-cell adhesion in the neuroepithelium.
Discussion
Tight control of the proliferation of neural progenitors allows for the generation of suitable numbers and patterns of neurons in the spinal cord. RPTPs such as PTPγ can now be added to the list of enzymes that play a part in controlling these events. After the onset of widespread expression of PTPγ between HH18 and HH20, this phosphatase plays several, potentially interrelated roles. When PTPγ expression is perturbed either up or down, the proliferation of neural progenitors is reduced and, in the case of PTPγ LOF, cell death rates increase. At a molecular level, one function of PTPγ may be to modulate Wnt/β-catenin signalling in the spinal cord, possibly through the dephosphorylation of β-catenin or of some other pathway target. The phosphatase also plays roles in maintaining neuroepithelial cell adhesion and polarity, as well as potentially facilitating the lateral migration of motor neuron precursors. The early avian spinal cord expresses several RPTP genes (Chilton and Stoker, 2000; Gustafson and Mason, 2000) and we now show that LOF and GOF in PTPγ generate defects in neurogenesis. This is intriguing given that PTPγ also acts as a growth suppressor in breast cancer cells (Liu et al., 2004) and can suppress differentiation in PC12 cells (Shintani et al., 2001). Although neurogenesis was normal in shRNA-treated LOF embryos prior to HH18, decreased neurogenesis and precursor numbers occurred thereafter, coincident with the onset of widespread PTPγ expression in progenitor and precursor cells. The ultimate neuronal deficits by E5 were largely reflected in a permanent loss of Lim1/2-expressing neurons in the LMC. Although total neuron numbers, judged by Isl1/2, were little altered on average, many of these would have been born prior to HH18 (Hollyday and Hamburger, 1977; Whitelaw and Hollyday, 1983). Moreover, the histological arrangement of the Isl1/2-positive neurons was clearly disturbed in many instances. We believe that the period of shRNA treatment between HH18 and HH22 has targeted those neurons whose birth temporally coincides with that window, in particular the LMCl population (Hollyday and Hamburger, 1977; Whitelaw and Hollyday, 1983). Since we believe that shRNA effectiveness declines after HH22, it is likely that some compensatory recovery of non-Lim1/2 neuron numbers occurs thereafter, although this remains to be demonstrated directly. The study therefore supports a critical requirement for PTPγ at least during the HH18-HH22 window of neurogenesis.
PTPγ and spinal neurogenesis
Although perturbation of PTPγ expression either up or down can suppress mitoses, this appears to occur through different mechanisms. With PTPγ GOF, suppression is most likely due to loss of TCF activity. TCF-driven transcription is critical for maintenance of spinal proliferation, downstream of Wnt/β-catenin signals (Megason and McMahon, 2002). PTPγ LOF in contrast does not alter TCF signalling, but instead induces distinct phenotypes. First, there is an increase in cell death in the progenitor and mantle zones. Second, we see a loss of mediolateral alignment of neuroepithelial cells and defects in lateral migration of motor neuron precursors. The latter observation indicates that PTPγ may be necessary for maintaining appropriate cell-cell adhesion in many cells after HH18. The increased apoptosis could be a consequence of this aberrant cell-cell adhesion and associated cell signalling, although a direct influence of PTPγ over cell survival is also possible. The experiments also showed that Notch signalling was not perturbed and neither was the pattern of mitotic spindle angles, although a straightforward relationship between spindle angle and cell fate in the spinal cord has recently been called into question (Wilcock et al., 2007). These data indicate that PTPγ may not be primarily affecting the balance of proliferation versus differentiation. Therefore, the decreased neurogenesis following PTPγ LOF most likely results from a perturbation of cell adhesion and progenitor tissue structure, and increased apoptosis; these could also be directly interrelated.
Defects in motor neuron development
The most striking effect of PTPγ LOF was the specific loss of pMN progenitor tissues in some embryos and concomitant mis-localisation of maturing MNs. This was not seen in other ventricular regions. It has been suggested that maturing MN precursors may provide feedback to progenitors, biasing them away from a MN fate (Pfaff et al., 1996). Isl-1 deficiency leads to premature MN death, where this feedback may be lost, resulting in compensatory MN production, progenitor exhaustion and the observed progenitor depletion (Chitnis et al., 1995;Pfaff et al., 1996). Although we do observe increased apoptosis associated with PTPγ LOF, this was not restricted to the pMN. Currently therefore, it remains to be determined why the pMN region is so sensitive to PTPγ LOF. For example, cells in the pMN could depend on very specific, cadherin-based adhesive functions that are perturbed by PTPγ. Alternatively, the pMN might be particularly sensitive to gene knock-down, since expression of the gene is lower there when compared to progenitors dorsally and ventrally ( Fig. 1S; Suppl. Fig. 2).
The medial mislocation of MNs is frequently observed after PTPγ LOF. This might arise simply through the physical depletion of ventricular tissue. Arguing against this are two factors. First, individual neurons were sometimes mislocated in otherwise normal-looking ventricular tissues. Second, non-electroporated neurons are rarely mislocated even when extensive progenitor tissue is missing, whereas mislocated neurons were nearly all GFP-expressing (Fig. 8). This indicates a cell-autonomous defect in MN precursor localisation. One potential role of PTPγ, therefore, is to regulate progenitor and precursor cell movement through cell-cell or cell-matrix interactions.
PTPγ and cell signalling
We have shown that PTPγ has profound effects on the generation, localisation and survival of neural precursors in the spinal cord. What are its likely molecular targets? The influence over cell polarity and migration could be at several levels. For example, in many cell types, tyrosine phosphorylation of cadherins and catenins controls cell-cell adhesion (Lilien and Balsamo, 2005; Sallee et al., 2006). The regulation of cell-matrix interactions is also heavily dependent on integrins and associated targets of tyrosine kinases such as FAK and Src (Mitra and Schlaepfer, 2006). Recent studies of the zebrafish tab mutation of laminin 1 demonstrate that integrin signalling and FAK activation are central to the control of interkinetic nuclear movement and neurogenesis in the neural tube (Tsuda et al., 2010). β-catenin is another candidate target. PTPγ GOF (shown here) and β-catenin LOF in mice and chick (Megason and McMahon, 2002; Zechner et al., 2003) all result in loss of ventral progenitor cells, potentially pointing towards a common basis of defective β-catenin signalling. Our data certainly show that PTPγ can suppress Wnt/β-catenin signalling through TCF in the spinal cord and that PTPγ can dephosphorylate β-catenin in vitro. Dephosphorylation of β-catenin phosphopeptides has also been demonstrated previously (Barr et al., 2009a). However, in vitro assays of PTP specificity are notoriously unreliable and several PTPs are already known to target catenins in other cell types (Lilien and Balsamo, 2005; Sallee et al., 2006). In the chick hindbrain the RPTP PTPλ interacts with β-catenin directly, and its overexpression suppresses Wnt/β-catenin signalling and cell proliferation (Badde and Schulte, 2008). Regulation of the Wnt/β-catenin signalling pathway may be one of the shared, physiological functions of PTPγ or PTPλ in ovo, but such signalling could also be subject to a complex level of PTP redundancy. Such redundancy might explain why changes in TCF activation are observed only after PTPγ GOF, not PTPγ LOF. Alternatively, changes in β-catenin phosphorylation may be necessary, but not sufficient, for TCF signalling. β-catenin phosphorylation has been shown previously to be insufficient to fully activate its nuclear function (Kim and Lee, 2001). Also, in our hands the phosphomimic Y654E β-catenin did not lead to full activation of the TCF pathway or to tissue hypertrophy. Components of the Wnt/TCF pathway other than β-catenin must therefore be considered as PTPγ targets. For example, the tyrosine phosphorylation state of nuclear cofactors such as steroid receptor coactivator-3 can affect the transcriptional activity of p300, a cofactor in β-catenin nuclear activity (Oh et al., 2008).
Other potential targets of PTPγ are the cadherins (Lilien and Balsamo, 2005;Sallee et al., 2006). Our observations of defective cell polarity and migration in ovo are consistent with defects in cell-cell adhesion, and MN migration does rely in part on cell-cell adhesion through cadherins (Price et al., 2002). Since cadherin function is also dependent on tyrosine phosphorylation (Lilien and Balsamo, 2005;Sallee et al., 2006), this remains an area of interest for further investigation.
In conclusion, this is the first demonstration of a role for PTPγ in the embryonic nervous system. PTPγ is a regulator of the proliferation and survival of avian spinal cord progenitors and neural precursors, playing potential roles in Wnt/β-catenin signalling, cell adhesion and the migration of motor precursors. It will be interesting to understand the role of this enzyme in more mature spinal neurons as well, since expression in these can be very high. PTPγ is one of a growing number of RPTPs, including PTPσ (Kirkham et al., 2006;Meathrel et al., 2002) and PTPλ (Badde and Schulte, 2008) that have roles in CNS neurogenesis. The actions of this enzyme family are therefore likely to be significant contributors to signalling cross talk with other known regulators of CNS growth and patterning.
Plasmids and silencing vectors
Plasmids pCAβ-GFP and pCAβ-RFP were provided by Jonathan Gilthorpe (Umeå University, Sweden) and the Renilla luciferase reporter vector pRL-SV40-renilla was from Promega, UK. Full length PTPγ cDNA was provided by Lu-Hai-Wang (NIMR, Mill Hill, UK) and was subcloned in-frame with 3xFLAG in p3xFLAG-CMV14 (Sigma Aldrich). This c-terminally tagged PTPγ cDNA was then subcloned into pCAβIRESGFP (gift of Jonathan Gilthorpe), for enhanced expression in ovo. The extracellular deletion of PTPγ was constructed by fusing the amino-terminal FLAG tags of p3xFLAG-CMV25 (Sigma) to amino acid 733 of chick PTPγ. Six, PTPγ-specific short hairpins were designed using Ambion algorithms. The sequences (Supp. Fig. 1) were checked using BLASTN in NCBI to avoid non-target homologies. Annealed oligonucleotides were ligated into the pSilencer1.0 U6 vector (Ambion, USA) and plasmids were named Si1-Si6. The negative control Silencer vector contained a random hairpin with no avian homologies according to BLASTN (Ambion, USA). The β-catenin-GFP fusion vectors were obtained from Addgene Inc. USA. The TOPGFP vector was given by Randall Moon (University of Washington) and TOPRFP was given by Nobue Itasaki (National Institute for Medical Research, UK). The Hes5-1 in situ probe vector was kindly given by Domingos Henrique (Fior and Henrique, 2005).
Cell culture and luciferase assays
Human embryonic kidney 293T cells were cultured in DMEM (Sigma, UK) containing 10% foetal bovine serum (Sigma, UK). Si vectors were co-transfected with the PTPγ expression plasmid and the luciferase reporter, using the calcium phosphate method. After 24 h, cells were lysed on ice in 0.25% Triton X-100, 150 mM NaCl, Tris-HCl pH 7.6 with protease inhibitors (Complete, Roche). Lysates were processed for immunoblotting and luciferase assays. Twenty-microlitre aliquots of lysates were assayed using the Luciferase assay system (Promega, USA) and a Berthold Technologies luminometer LUMAT LB 9507 (Bad Wildbad, Germany).
Immunoblotting
Lysates were subjected to SDS polyacrylamide gel electrophoresis (PAGE) and transferred to PVDF membrane. After blocking in 10% milk powder/TBST (50 mM Tris pH 7.4, 150 mM NaCl, 0.2% Tween20) overnight, the filters were probed with primary and HRP-conjugated secondary antibodies diluted in 10% Milk/TBST. The bound antibodies were detected using ECL plus (Amersham, UK).
Immunodetection
For immunohistochemistry, slides were pretreated with 1% hydrogen peroxide in PBS for 20 min, then blocked in 4% BSA (Fraction V, Sigma) in PBS for 20 min. Primary and secondary antibodies were added for 1 hour each, with intermediate wash steps. Bright field and fluorescence microscopy was carried out using a Zeiss Axiovert, and recorded with a Hamamatsu Orca-1 camera and Openlab software (PerkinElmer UK).
To detect activated caspase-3 and phosphohistone, cryosections were placed in Declere (1x; Cell Marque Corp, California, USA) for 40 min with steam, then in freshly boiled Declere for 10 min. Slides were washed in TBST pH 7.5 (100 mM Tris, 150 mM NaCl, 0.1% Tween 20), treated with 3% H2O2, then washed again. Slides were blocked in 0.15% glycine, 2 mg/ml BSA, 5% goat serum in TBST for 30 min, then primary antibodies were added overnight. After washing, slides were incubated with secondary antibody, washed, then detection was performed with ABC (Vector Labs) solution and diaminobenzidine. For immunofluorescence studies, the secondary antibody was biotinylated anti-mouse, followed by streptavidin-linked Cy3.
In situ hybridisation
The chick PTPγ probe was generated as described (Ledig et al., 1999). The probe covered bp 2799-3144 of the cDNA (accession U38349) and has been shown to be specific (Chilton and Stoker, 2000;Ledig et al., 1999). RNA probes were synthesized according to manufacturer's protocols, using DIG-labelling kit (Roche Diagnostics, Burgess Hill, UK). Probes were denatured, hybridised overnight at 70°C and slides were then washed in 1x SSC, 50% Formamide, 0.1% Tween20 at 65°C followed by TBST. Slides were treated with anti-DIG antibodies according to manufacturer's protocols (Roche Diagnostics), followed by alkaline-phosphatase (AP)-linked secondary and standard AP detection. The chick NeuroM probe was provided by Marc Ballivet (Roztocil et al., 1997).
Cell counts and GFP quantitation
Labelled neurons were counted on each side of individual spinal cords and a ratio of the numbers was calculated, using at least two sections per embryo. Means and standard deviations of these ratios were calculated for each treatment group. PH-3-positive cells were counted in the electroporated and contralateral ventricular zones in individual embryos and a ratio per embryo calculated; ventricular zone length was reduced by less than 5% by electroporation. Means and standard deviations of these ratios were then calculated. For quantitation of TOP-GFP expression, the total pixel intensities of co-electroporated RFP (from pCAβ-RFP) and GFP (from TOP-GFP) were measured in the top 60% of each spinal cord section, using Volocity software (Perkin Elmer/Improvision); two sections were measured for each embryo and a mean value taken. After background removal, the final GFP/RFP ratio was calculated for each embryo. In all studies, a Student's t-test was carried out to calculate statistical significance. Measurement of nuclear orientation was carried out using Openlab. Spinal cord sections were stained with DAPI and 25-50 nuclei per spinal cord side were measured from individual embryos. Means and standard deviations were calculated and a ratio of SD generated for each embryo.
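To make the per-embryo quantitation workflow described above concrete, the following is a minimal sketch (not the authors' actual analysis code) of how electroporated/contralateral ratios, group means and a two-sample t-test could be computed; the embryo identifiers and example counts are hypothetical placeholders.

```python
# Minimal sketch of the per-embryo ratio and significance analysis described above.
# All identifiers and example numbers are hypothetical; the original analysis used
# Openlab/Volocity measurements and a Student's t-test, as stated in the text.
from statistics import mean, stdev
from scipy import stats  # assumed to be available; any t-test implementation would do

# counts[embryo_id] = (cells on electroporated side, cells on contralateral side),
# averaged over at least two sections per embryo
control_counts = {"c1": (48, 50), "c2": (51, 49), "c3": (47, 48)}
si3_counts = {"s1": (28, 50), "s2": (33, 47), "s3": (30, 52)}

def per_embryo_ratios(counts):
    """Ratio of electroporated to contralateral cell counts, one value per embryo."""
    return [ep / contra for ep, contra in counts.values()]

control_ratios = per_embryo_ratios(control_counts)
si3_ratios = per_embryo_ratios(si3_counts)

print(f"control: mean={mean(control_ratios):.2f} sd={stdev(control_ratios):.2f}")
print(f"Si3:     mean={mean(si3_ratios):.2f} sd={stdev(si3_ratios):.2f}")

# Two-sample Student's t-test between treatment groups, as in the paper's figures
t_stat, p_value = stats.ttest_ind(control_ratios, si3_ratios)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")
```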
Supplementary materials related to this article can be found online at doi:10.1016/j.mcn.2010.11.012 | 8,524.6 | 2011-02-01T00:00:00.000 | [ "Biology" ] |
Speech act analysis of David Mamet’s American Buffalo
A speech act is the use of words and sentences that commits the speaker or the hearer to do something. Searle groups speech acts into five categories: assertives, directives, commissives, expressives and declaratives. This paper aims at analyzing the categories of speech acts in the play 'American Buffalo' by David Mamet. It aims to find the types and the functions of the speech acts and the differences between them. This is done through analyzing four texts of the characters' dialogues, through which we can understand the sophisticated personalities of the different characters. The analysis is carried out from a pragmatic point of view, mainly on the basis of the typology proposed by Searle (1969). Usually the speech acts fluctuated depending on the characters' attitudes and their psychological states. In general, the context and the nature of the relationship between the characters play a prominent role in choosing a specific form of an act. In most cases the speech acts were performed successfully, in the sense that the speaker usually carried out the act, as in text (3), where the perlocutionary act is clear when the harm was actually done to the hearer. Some of the acts were used indirectly, depending on the conditions. The first four categories of speech acts, namely assertives, directives, commissives and expressives, were frequently found in the play with different forms and functions. Acts belonging to the fifth category, declaratives, however, were not found in the play at all.
Introduction:
When people communicate, they rely on their language lexicon and on sets of linguistic rules. Most people are aware of language rules such as those that determine the way in which linguistic elements like letters and words are combined to form larger constituents such as phrases and clauses (syntactic knowledge), and also of how words and larger structures have meanings (semantic knowledge). However, communication between people is not a simple task, and it requires serious collaboration from all parties involved in the conversation (Birner, 2012:1). In order to arrive at more than the literal meaning of what is said, speakers and hearers must also draw on pragmatics. Modern books have two distinct views on pragmatics: on the one hand, there are those that associate pragmatics with speaker meaning, and on the other, those that relate it to utterance interpretation. However, both of these views have their shortcomings, since those who equate pragmatics with speaker meaning focus more on the social aspect of language use and the role of the speaker, and pay less attention to the fact that meaning can be interpreted on several levels. Those who define it as utterance interpretation are more concerned with the cognitive aspect, focusing on the role of the receivers of the message and ignoring the production of the utterance (Thomas, 1995:2). One of the main concepts that pragmatics sheds light on is speech act theory.
Speech Act Theory
Speech act theory, also called the "How to Do Things with Words" theory, is a philosophical approach to language, based largely on J.L. Austin's (1962) and John Searle's (1969) works. Bliss (1983) states that speech act theory came about as a reaction against the traditional philosophical approach, which viewed sentences as detached from context and focused only on their truth conditions, i.e. their truth or falsity.
The very term 'speech act' was possibly coined by the German linguist Bühler in 1934 (Lyons, 1977). The theory, as the name suggests, deals with speech acts or, as Levinson (1991: 259) states, "action-like properties of utterances". Hurford and Heasley (1990: 239) define speech acts as words or sentences that are employed to do things of social importance, and not merely to describe the world.
Speech-act theory, as used in the philosophical tradition, can be key to a better comprehension of language, since it is stronger than the prior tendency to think just in terms of separate propositional truths (Poythress, 2008:16). Many philosophers found the speech act theory fascinating for a number of reasons. Lyons (1977:725) located the theory's importance in linguistics in its capability of giving "explicit recognition to the social or interpersonal dimension of language-behavior and provides a general framework". Van Dijk (1980) maintains that the philosophical and linguistic theory of speech acts is of great prominence among the basic concepts of pragmatics, and that its concern has mainly been an abstract study of the illocutionary sides of language use. For Leech (1983: x), speech act theory's influence on pragmatics is estimated as the strongest, stating: "Up to now, the strongest influences on those developing a pragmatic paradigm have been the formulation of a view of meaning in terms of illocutionary force by Austin and Searle". Despite its philosophical origins, speech act theory eventually made its way into other areas of study. It became applicable in linguistics, namely in syntax, semantics, pragmatics, and even sociolinguistics (Kock, 1997:14). Other areas where speech act theory sparked interest, as Levinson (1983: 226) states, include the field of psycholinguistics, where speech act theory is used as one of the necessary tools for language acquisition. It has also been taken up by literary critics to better understand literary genres, and anthropologists have used it to study magical spells and rituals; in linguistics, Levinson states that the theory has been applied to problems in syntax and semantics.
Austin's Speech Act Theory
Although in the sixties most of the focus and work of linguistics was mainly on syntax, within the framework of Chomsky's development of transformational grammar, a few philosophers worked on the semantic branch. It is believed that the theory was first foreshadowed by the views of the Austrian philosopher Ludwig Wittgenstein in 1953, who claimed that the meaning of words is to be found in their use (Kock, 1997:3). However, it was Austin's works that triggered interest in what is now called pragmatics, although there were other famous philosophers, such as G. E. Moore and Wittgenstein, during Austin's time who also contributed to pragmatics. Austin's work was more influential due to four factors: first, his collected lectures 'How to Do Things with Words' appeared at the right time, in line with the growing disapproval of the truth-conditional view of semantics; second, his work was comprehensible; third, despite the changes and readjustments he made to his works, the main line of thought remained; and finally, his works indicate other important matters in pragmatics today (Thomas, 1995:28).
On the basis of the concept that language is used to perform actions, Austin classified speech acts into constatives and performatives. Performatives are those utterances that change the state of the world somehow by performing a kind of action, not just stating something that can be either true or false. Constatives, however, are merely statements of fact, or declarative utterances expressing some state of affairs (Smith, 1991; Sadock, 2007).
Austin considered the conventions and rules that must be followed for performatives to be successful, because although performatives cannot be true or false, they can go wrong, i.e. be infelicitous or unhappy. These conditions for a speech act to be successful are called felicity conditions. Austin gave these felicity conditions:
A. (i) There must be a conventional procedure having a conventional effect. (ii) The circumstances and the persons must be appropriate.
B. The procedure must be executed (i) correctly and (ii) completely.
C. Often, (i) the persons must have the requisite thoughts, feelings and intentions, as specified in the procedure, and (ii) if the consequent conduct is specified, then the relevant parties must do it (Levinson, 1983:229).
Austin also made a threefold contrast between types of acts that occur when language is used; they are characteristic of most performatives and constatives too (Horn and Ward, 2004): 1. Locutionary Act: Austin (1962: 108) states that the locutionary act is the uttering of a specific sentence with sense and reference. 2. Illocutionary Act: This type of act is related to speakers' intentions and motives, i.e. asserting, questioning, warning, requesting, giving commands, threatening.
3. Perlocutionary Act: This type of act deals with what is gained by the performance of a speech act. While the illocutionary act is speaker-centred, the perlocutionary act is hearer-based; just as illocutionary acts have illocutionary force, perlocutionary acts have a perlocutionary effect on the hearer (Birner, 2012:187).
In addition, Austin (1962:151) set up five categories of speech acts based on illocutionary force, as follows: 1. Verdictives: These are acts in which a verdict or appraisal is given, usually by someone in a position of power to give that appraisal.
2. Exercitives: They involve the exercise "of powers, rights, or influence."
3. Commissives: They commit the speaker to an action or intention.
4. Behabitives: These acts have to do with social behavior, including apologizing, congratulating, commending, etc.
5. Expositives: These are acts that explain how our utterances fit into the course of an argument or conversation, how people are using words, or, in general, are expository.
Many scholars also contributed to the development of speech act theory after Austin, such as Strawson (1964); Grice (1967); Searle (1969, 1976); Benjamin (1976); and Davison, Wachtel, Spielman, etc. (1971) (Kock, 1997:3). John Searle, a major proponent of the speech act theory, inherits his ideas from Austin and elaborates on some of them, but develops the theory in his own style.
Searle's Account of Speech Act Theory
Although Austin's theory was taken up for further elaboration by several theorists, most importantly by Searle, Zaefferer (2001) states that Searle's formalized speech act theory has become something of a classic, at least among the majority of linguists. He adds that although this five-fold classification has been criticized many times and alternatives have been proposed, it nonetheless continues to be the most widely accepted one to this day. Searle's development of Austin's work comes mainly from his most important works, namely Searle (1969, 1979) and Searle and Vanderveken (1985). Searle's works in 1969-1979 started where Austin's had finished off, and his earlier works focused on trying to put Austin's ideas into a unified and systemized theory through a number of contributions (Smith, 1991:3).
Unlike previous studies of language that considered words, sentences, morphemes, etc. as the basis of investigation, Searle suggests that language should be studied with reference not to linguistic types or tokens, but with reference to certain actions, i.e. illocutionary acts (Doerge, 2006:72). In other words, the assumption of his speech act theory is that the minimal unit of human communication is not the sentence but the performance of illocutionary acts such as suggesting, commanding, requesting, etc. (Searle, Kiefer and Bierwisch, 1980:5).
Searle's perspective was somewhat different from Austin's, since he did not approve of Austin's distinction between locutionary, illocutionary and perlocutionary acts. He does not reject the perlocutionary and illocutionary acts, but divides the locutionary act into two other types of acts, namely the utterance act and the propositional act.
Searle's focus was mainly on the description of illocutionary acts, but he does not give a straightforward definition of speech acts; the closest definition is that they are the basic or smallest units of all linguistic communication (Searle, 1969: 16). Instead, he (1969:24) describes them by listing the subtypes of speech acts: (a) an utterance act (uttering words, morphemes, or sentences), (b) a propositional act (referring and predicating), (c) an illocutionary act (questioning, stating, requesting, etc.), and (d) a perlocutionary act (achieving some effect on the actions, thoughts, etc. of his/her hearer).
These acts are performed in accordance with rules (Searle, 1969: 16, 24-25, 37); they are not separate but happen at once. When one performs an illocutionary act, he also performs a propositional act and an utterance act. A distinction was also made between propositional and illocutionary acts; for instance, the following examples have the same propositional act but different illocutionary acts:
1. Sam smokes habitually. 2. Does Sam smoke habitually? 3. Sam, smoke habitually! 4. Would that Sam smoked habitually! (Searle, 1969: 22-24). Since all of these examples refer to the same person, 'Sam', and all of them have the same proposition (content), which is 'Sam smokes habitually', all of the sentences perform the same propositional act (Tiesma, 1986). However, in each of these utterances the speaker has a different intention (force or function): in 1) the speaker performs an assertion, 2) is a question, 3) is an order, and 4) is a kind of wish. The reference and the predication appear in different places in each utterance, and in each utterance a different speech act is attempted.
Searle also tackles some other notions such as rules, propositions and meaning. The rules of a language give meaning to sentences and help the speakers to send messages, which are in turn understood by hearers. He argues (1969: 41) that "speaking a language is engaging in a rule-governed form of behavior" and "performing acts according to rules". Searle called such rules constitutive rules, which usually make up part of the activity and cannot be separated from it. For example, the rules of the game of chess are constitutive of the game itself (Schiffrin, 2005:44). Searle contrasted the constitutive rules with regulative rules, which "regulate our linguistic behavior"; regulative rules are those that regulate activities that already exist (Fotion, 2000:23). For example, a car can still be driven without abiding by traffic regulations, but it is not possible to drive it without starting the engine, pressing in the clutch, etc. (Schiffrin, 2005:45).
When it comes to the notion of meaning, Searle tries to answer questions like: what is it to say something and mean something? And what does it mean for something to have meaning? Attempting to answer such questions, he borrows Grice's definition of the term 'meaning', which states: "To say that a speaker meant something by X is to say that he/she intended the utterance of X to produce some effect in the hearer by means of the recognition of this intention" (1969: 43). However, Searle considers such a notion to be defective since it does not show the connection between one's meaning of something and what it actually means. Searle and Vanderveken (1985:53) also introduce a language feature, direction of fit, arguing that there are "four and only four" possible directions of fit for any utterance. Here is their account of direction of fit:
1. The word-to-world direction of fit: In achieving success of fit, the propositional content of the illocution fits an independently existing state of affairs in the world.
2. The world-to-word direction of fit: In achieving success of fit, the world is altered to fit the propositional content of the illocution.
3. The double direction of fit: In achieving success of fit, the world is altered to fit the propositional content by representing the world as being so altered.
4. The null or empty direction of fit: There is no question of achieving success of fit between the propositional content and the world, because in general success of fit is presupposed by the utterance.
2.2.1 Searle's felicity conditions:
Guided by the constitutive and regulative rules of language use, Searle (1969) also suggests felicity conditions that are different from the ones proposed by Austin. Searle's felicity conditions are not dimensions on which utterances can be successful or unsuccessful; rather, they are "constitutive of the various illocutionary acts". The conditions that Searle outlined are the following (1969: 54-71): 1. Propositional content conditions: These refer to the constraints put on the content by the performance of a felicitous illocutionary act, such as the tense or subject of utterances. For instance, in the case of promises, the content must refer to a future action; it is not possible to say 'I promise to have done it by last week' (Schiffrin, 2005:48).
2. Preparatory conditions: These are the presuppositions that are made about the illocutionary act, which are usually "peculiar to illocutionary force". For instance, when the speaker promises something, it usually presupposes that he/she is able to fulfil that promise. 3. Sincerity conditions: These conditions indicate that the speech act performed is in line with what the speaker believes, intends or feels; for example, the speaker intends to fulfil his promise or believes what he/she asserts. 4. Essential conditions: Searle explains this condition in terms of intention, since speech acts are performed intentionally. Fotion (2000) states that this condition deals with what 'counts as': for instance, a 'request' counts as an attempt to make the addressee perform an action, a 'promise' counts as putting an obligation on the speaker to do an action, etc.
Searle's Taxonomy of Speech Acts
Searle criticizes Austin's taxonomy in that there is a widespread confusion between verbs and acts, not all the verbs are illocutionary verbs, the categories overlap too much, and there is much diversity within the categories. Many of the verbs listed in the categories also do not match the definition given for the category, and, most prominently, there is no consistent principle of classification (1979:11-12).
Taking into consideration four basic dimensions (illocutionary point, propositional content, direction of fit and expressed sincerity conditions) as the basis for constructing his alternative classification of roughly five groups, Searle (1975: 356-364) presents them as follows: 1. Assertives: Members of this class are assertions that represent a state of affairs. The point or purpose of performing assertives is to commit the speaker to the truth of the expressed propositional content. All of the members can be evaluated in terms of truth or falsity.
2. Directives: The illocutionary point of members of this category is that they are attempts by the speaker to get the hearer to do something. The attempts can vary in strength; they may be mild or strong. The direction of fit is world-to-word and the sincerity condition is wanting (or wishing or desiring).
3. Commissives: Commissives are illocutionary acts whose point is to make the speaker responsible for some future action. The direction of fit is world-to-word and the sincerity condition is intention. The propositional content is again usually that the speaker does some future action.
4. Expressives: These are speech acts whose illocutionary point is to demonstrate the speaker's psychological state towards some former action or state of affairs. Expressive verbs include thanking, congratulating, apologizing, condoling, deploring and welcoming. Expressives lack a direction of fit. This means that in performing an expressive, the speaker is neither trying to get the world to match the words nor the words to match the world; rather, the truth of the expressed proposition is presupposed.
5. Declaratives: These are acts that, when performed successfully, bring into being a state of affairs, creating immediate changes in the world; when the speaker utters such acts, he thus creates a correspondence between the propositional content and the world. Thus, Searle's five classes can be briefly summarized as: 1) "Tell people how things are", 2) "Try to get them to do things", 3) "Commit ourselves to doing things", 4) "Express our feelings and attitudes", 5) "Bring about changes through our utterances" (Ballmer and Brennenstuhl, 1981:56).
The following are examples of the corresponding five types of speech acts (Huang, 2006:106-108): Assertive: The soldiers are struggling on through the snow. Directive: Turn the TV down. Commissive: I will be back in five minutes. Expressive: I'm so happy. Declarative: We find the defendant not guilty.
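Purely as an illustration, Searle's five classes and their defining dimensions can be represented as a small data structure; the field values below simply restate the classification summarized above, and the example utterances are the ones cited from Huang (2006). The class and field names are the present writer's, not Searle's.

```python
# Illustrative encoding of Searle's five speech act classes and their key
# dimensions (illocutionary point, direction of fit, sincerity condition),
# paired with the example utterances from Huang (2006: 106-108).
from dataclasses import dataclass

@dataclass
class SpeechActClass:
    name: str
    illocutionary_point: str
    direction_of_fit: str
    sincerity_condition: str
    example: str

SEARLE_CLASSES = [
    SpeechActClass("assertive", "commit the speaker to the truth of the proposition",
                   "word-to-world", "belief",
                   "The soldiers are struggling on through the snow."),
    SpeechActClass("directive", "get the hearer to do something",
                   "world-to-word", "want/desire",
                   "Turn the TV down."),
    SpeechActClass("commissive", "commit the speaker to a future action",
                   "world-to-word", "intention",
                   "I will be back in five minutes."),
    SpeechActClass("expressive", "express the speaker's psychological state",
                   "none (presupposed)", "varies with the psychological state",
                   "I'm so happy."),
    SpeechActClass("declarative", "bring about a change in the world by the utterance",
                   "double (word-to-world and world-to-word)", "none",
                   "We find the defendant not guilty."),
]

for cls in SEARLE_CLASSES:
    print(f"{cls.name:12s} | fit: {cls.direction_of_fit:40s} | e.g. {cls.example}")
```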
Speech Act Theory and Drama
At first, it was usual for speech act theory to be applied exclusively to language; later its application to literary genres gradually became more common practice. Pratt (1997) admits that the first attempt to apply speech act theory to literature was made by Ohman (1971). Pratt's views are different from Ohman's. Ohman's belief is that since not all of Austin's felicity conditions can be applied to statements within works of literature, and ordinary language differs from literary language, they should therefore be called "quasi speech acts". Pratt, especially in Traugott and Pratt (1980), disagrees with Ohman. Following Pratt, many researchers applied speech act theory to different genres of literature (Abbas, 2011:14). Koten (2012:174) states, "Literature cannot imitate reality directly, it can perfectly imitate an utterance about reality". Thus, a fictional utterance might have the form of an assertion even if it is not actually an assertion itself. Accordingly, authors of fiction can achieve a special effect: although readers of fiction know about the fictitiousness of a speech act, they read the fictional story as if it were real. So, when readers read a novel or a drama, they can in their imagination treat the circumstances as if they were real. According to Thornborrow and Wareing (1998), since plays exist in two forms, i.e. as text and on stage, there have been some issues for researchers, and the two forms require different approaches. Some critics believe that since plays are written for performance on stage, they can be understood only in the theater. Others have found it easier to focus on the written text than on its performed version when analyzing, since the language of plays consists of turns or dialogues among the characters of the text. In addition, linguistic analysis of drama can show that plays contain very rich instructions for their performance, which Searle (1975:328) states are "directions given by the writer of the play for the actors" as to how to enact a pretense, which the actors then follow. Therefore, an understanding of plays can be accomplished through 'mere reading' (Meek and Short, 2007:7). Austin (1962:22), in his speech act theory, excluded literature and drama from his analysis, stating that a performative utterance will fail if uttered on stage by an actor, and further adding that speech acts used in literary works are "void". Later, he acknowledged that dramatic communication happens through a language that looks like real-world conversations. Bliss (1983:16) states that speech act theory has been used to define fiction itself and to tackle certain literary texts. It has especially been beneficial in viewing a text as a communicative act and not merely as an object in and of itself. Speech act theory of literature, across all its genres, highly values the context in which an utterance is made. Speech act theory is highly noticeable in bringing together language and literary thoughts and goals; the set of concepts included in speech act theory is applicable to different kinds of literary works such as the novel, drama, poetry, and so on.
According to Brown and Levinson (1987:10), since speech act analysis is a sentence-based, speaker-oriented type of analysis, it can provide a great deal of information when applied to a speaker-oriented genre like drama. A character's ability to use performative language is often an indicator of how much power he or she has in the play. Many critics have analyzed Renaissance drama, particularly the tragedies, from this perspective.
Van Dijk (1977:5) argues that "literature constitutes a speech act on its own", because a literary text is generally made up of multiple sentences and each such sentence can be taken as a possible speech act. Koten (2012:175) states that the characters' interactions in a play imitate authentic speech acts, such as assertions, warnings, promises, requests, orders, and verbal expressions of states of mind and emotion. Mamet's dramatic discourse is notorious for its informality and its use of slang as a means of expressing strong feelings. Some critics have compared him to Eugene O'Neill, especially for his skill in making the speech patterns of ordinary street life believable. Although some critics hold that there is not much action in his plays, all speech in Mamet's plays is a kind of action, that is, a speech act: characters in his plays admit, deny, offer, accept, deceive, sell, plead, reveal, and conceal using language. In this fashion, Mamet, more than perhaps any contemporary playwright, tries to convey action through the characters' dialogue (ibid.).
Mamet's Language
Whately (2011) argues that Mamet's writing is both minimalist and poetic: minimalist in the sense that Mamet usually uses very few words to convey his message, and poetic in the sense that he is able to add poetic rhythm to ordinary street conversation, which is marked by frequent profanities, slurs, and insults.
Mamet is famous for writing two types of plays. The social/urban plays usually take place in a business-like environment where the characters are in constant competition with one another; the domestic/rural plays are usually set outdoors or at home and concern people trying to reach one another through meaningful communication. Accordingly, some refer to the language of the social/urban plays as realist and that of the domestic/rural plays as poetic (Whately, 2011:19).
American Buffalo is his two-act play that revolves around three characters, namely Don, Teach, and Bob. The first act takes place in "Don's Resale Shop," a junk store run by Don Dubrow, the play's protagonist. The second act takes place at 11:15 that evening, before Teach has arrived; Don is also unable to reach Fletcher, whose phone line remains busy.
Methodology:
This study relied on a descriptive qualitative method to analyze the speech acts. The data were in the form of utterances taken from the play, and the source of the data was the script of David Mamet's play. The analysis of the play
is carried out using Searle's five-category typology of speech acts, which is applied to the utterances identified in the speech of the three main characters.
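As an illustrative aid only (the study's own procedure is qualitative, not automated), the sorting of performative verbs into Searle's five classes can be sketched as a simple lookup; the verb lists below are assumptions distilled from the classifications cited in this paper, not an exhaustive inventory.

# Minimal, illustrative sketch of mapping performative verbs to Searle's classes.
# The verb lists are assumptions based on the classifications cited in this paper;
# the analysis in this study itself was carried out qualitatively, not automatically.
SEARLE_CLASSES = {
    "assertive":   {"state", "suggest", "insist", "swear", "predict", "claim"},
    "directive":   {"request", "order", "command", "urge", "invite", "advise", "suggest"},
    "commissive":  {"promise", "offer", "refuse", "pledge", "threaten", "vow", "guarantee"},
    "expressive":  {"apologize", "thank", "congratulate", "console", "greet", "lament"},
    "declarative": {"pronounce", "declare", "sentence", "name", "fire"},
}

def classify(performative_verb: str) -> list[str]:
    """Return the Searle classes under which a performative verb is listed."""
    verb = performative_verb.lower()
    return [cls for cls, verbs in SEARLE_CLASSES.items() if verb in verbs]

if __name__ == "__main__":
    for verb in ("promise", "advise", "thank", "suggest"):
        print(verb, "->", classify(verb))   # note: "suggest" appears under two classes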
Analysis of Assertive Speech Acts
The first analysis concerns the class of assertives, the first group distinguished by Searle. According to Searle's theory of speech acts, the category of assertives has the illocutionary point or purpose of committing the speaker to the truth of the expressed proposition, to something's being the case (Searle, 1979:2). Assertive speech acts express the speaker's belief together with his intention or desire that the hearer form a similar belief; an assertive utterance asserts something that can be judged as true or false. The illocutionary point of an assertive act thus centers on getting the hearer to form a parallel belief. The mode of achievement and the propositional content condition are neutral.
The preparatory condition is that the speaker has reasons or proof for the truth of the propositional content. The sincerity condition is that the speaker believes the propositional content. The degree of strength is neutral. This group contains most of Austin's (1962) expositives and many of his verdictives, e.g. suggest, put forward as a hypothesis, insist, swear, state, etc.
Text (1)
Don: and he's no dummy, Teach
Teach: far from it. all I'm saying, the job is beyond him. Where's the shame in this? This is not jacks, we get up to go home we give everything back. huh? you want this fucked up? Pause All I'm saying, there's at least a chance something might fuck up, you'd get the law down, you would take the shot, and couldn't find the coins whatever: if you see the least chance you can't afford to take the chance! Don? I want to get in there and get this motherfucker. Don? where's the shame in this?
Context: Teach and Don are at the shop talking about Bobby and whether to send him in for the business or not.
Discussion:
Don has decided to include Bob in the business, which means sending him to steal back the nickel from the man who bought it at a low price from Don's
store the previous day. This decision does not appeal to Teach; therefore, he explains at length to Don that Bob is not qualified enough for the mission and that he might have his own hidden motive. Instead, Teach wants to carry out the theft himself. Teach first states that "the job is beyond him", which expresses his proposition that Bobby is not competent for the robbery. In uttering this sentence, he aims to create a similar belief in Don. Following Searle, an assertive act is one that asserts something which can be judged as true or false and is aimed at forming a parallel belief in the hearer. Therefore, Teach's utterance counts as an assertive act that has the function of persuasion. Later he says, "All I'm saying, there's at least a chance something might fuck up"; the proposition of this utterance counts as a prediction of some future event. Teach predicts that some bad outcome might occur if Bob went in instead of him. Teach's second utterance thus counts as an assertive act functioning as a prediction.
Analysis of Directives
The second type analyzed in the play is the directives, corresponding to Searle's second class in his taxonomy. These are speech acts by which the speaker asks the hearer(s) to do or not to do things. Speakers perform directives with the intention of committing the hearer to a future action, usually to make the world fit the words through the hearer (Jucker and Taavitsainen, 2008:88). The speech acts listed in this group by Searle (1969) include requesting, questioning, ordering, commanding, suggesting, urging, inviting, etc. The class also includes many of Austin's exercitives.
According to Searle and Vanderveken (1985:55), directives in general have the propositional content condition of some future action of the hearer. They also have the preparatory condition that the hearer is able, both physically and mentally, to carry out the action required. Directives in general have the sincerity condition that the speaker wishes or wants something from the hearer, with varying degrees of strength according to their illocutionary force. Lastly, a directive usually constitutes a reason for the hearer to do the thing he is directed to do. Directives can be realized by imperatives and subjunctives; furthermore, indirect requests can be expressed by interrogatives and declaratives (Jucker and Taavitsainen,
2008:88). In addition, they can be used for quite a number of illocutionary acts, ranging from order or command to plea, advice, offer, suggestion, and wish. The propositional content always indicates some future action of the hearer.
Text (2)
Don: well, that very well may be, Bob, but the fact remains that it was business. that is what business is.
Bob: what?
Don: people taking care of themselves
Bob: no
Don: because there is business and there is friendship. Bobby there are many things, and when you walk around you hear a lot of things and what you got to do is keep clear who your friends are, and who treated you like what.
Context:
Don and Bob talk about Ruthie and Fletcher. Fletcher bought an object that Ruthie owned at a very low price, which led Bob to regard it as stealing.
Discussion:
The dialogue above occurs between Don and Bob. Don is instructing Bob about the importance of business and the difference between friendship and business. Bob reveals to Don that Fletcher stole a piece of pig iron (an object) from Ruthie despite their being friends; in reality, Fletcher cheated Ruthie and bought it very cheaply. This leads Bob to consider such an act the same as stealing. Don, however, considers what Fletcher did business and not stealing. He assumes that this is the way people take care of themselves. As wrong as it may sound, he advises Bob to follow the same pattern: what Bob should do is distinguish between business and friendship. Don's sentence "what you got to do is keep clear who your friends are" has a directive meaning because it contains the phrase "got to" and therefore functions as advice. Searle (1979) lists advising as a directive speech act, because the point of directives is to urge the hearer to perform some sort of action. Accordingly, Bob is advised, and urged, by Don to tell friendship and business apart and to recognize his true friends.
Analysis of Commissives:
The next part of the analysis concerns the commissives, the third group in Searle's taxonomy. According to Searle (1979), these are speech acts whose successful performance commits the speaker, and puts him under an obligation, to bring about the truth of the expressed propositional content. Hancher (1979) notes that although Searle does not discuss variation in the degree of commitment, commissives do vary in this respect, as in the difference between committing oneself by promising and by guaranteeing. Radhi (2017) states that this category includes verbs such as offer, promise, refusal, pledge, threat, vow, swear, and acceptance. According to Vanderveken and Kubo (2001:34) and Mey (1993:164), the speaker is the one who usually carries out the future action by which the world is made to match the expressed proposition of the utterance.
Text (3)
Teach: I want for you to tell us here and now (and for your own protection) what is going on, what is set up, where Fletcher is and everything you know.
Don: I can't believe this
Bob: I don't know anything
Teach: you don't?
Bob: no
Don: tell him what you know, Bob
Bob: I don't know it, Donny. Grace and Ruthie
Teach grabs a nearby object and hits Bob viciously on the side of the head.
Teach: grace and Ruthie up your ass, you shithead; you don't fuck with us, I'll kick your fucking head in (come in here with your fucking stories)
Context: Don and Teach are waiting outside at midnight for Fletcher to come; instead, Bob shows up and is being secretive, and Teach is not taking it well.
Discussion:
Don and Teach intend to carry out the robbery alone, but to their surprise Bob comes back to where they are waiting for Fletcher. Bob is behaving suspiciously; Don and Teach come to the conclusion that he
is hiding something from them, which makes Teach particularly angry. Teach indirectly threatens that he will hurt Bob if he does not cooperate when he states "for your own protection". According to Searle (1969), the speech act of threatening is also considered a commissive speech act, whose illocutionary point or purpose is to express a future penalty for the hearer under a certain condition, so as to encourage the hearer not to make that condition come true. When Bob did not do as told, Teach indeed committed himself to the act of doing harm to Bob: he hit Bob on the head when Bob failed to cooperate. Teach performed a commissive act, which functions as a threat.
Analysis of Expressives:
The next group analyzed is the expressives, the fourth class of Searle's proposed taxonomy. They express the feeling of the speaker, either about themselves or about the world (Searle, 1976:12); that is, expressive speech act verbs usually serve to express good or bad evaluations, and they are hearer-oriented. Examples of expressive speech act verbs are apologizing, consoling, congratulating, lamenting, praising, greeting, and welcoming (ibid.). In performing expressives the speaker is neither trying to get the world to match the words nor the words to match the world; rather, the truth of the expressed proposition is presupposed (Searle, 1975:256-257).
Text (4)
Teach: and tell him he shouldn't say anything to Ruthie
Don: he wouldn't
Teach: no? No, you're right. I'm sorry, Bob.
Bob: it's okay
Teach: I'm upset
Bob: it's okay, Teach. Pause
Teach: thank you.
Bob: you're welcome.
Context:
Don, Bob, and Teach are at the store. Don wants to send Bob to fetch some breakfast, and Teach says something to Bob for which he soon apologizes.
Discussion:
In the above exchange between Don, Bob, and Teach, Teach thinks that if Bob goes to the diner to get food for him and Don, he might tell Ruthie where Teach is. He does not directly tell Bob not to say that he is at Don's shop, but asks Don to tell him. Although Don affirms that Bobby wouldn't, Teach remains suspicious, jokingly asking "no?".
Then, realizing that what he said was wrong, he expresses his regret and apologizes to Bob. Searle and Vanderveken (1985:16) state that a speaker usually apologizes for something he or she did or feels responsible for doing. Teach realizes that what he said was insulting to Bob and therefore feels a responsibility to apologize, followed by the excuse that it was his anger that led him to say it. Bob accepts his apology, and Teach shows his gratitude by thanking him. Following Searle (1969:65), thanking is an act performed by the speaker in response to a past act performed by the hearer, usually one in the speaker's favor. It can be said that both of Teach's utterances execute expressive speech acts: the first functions as an apology and the second as thanking.
Analysis of Declaratives:
These are acts that, when performed successfully, bring a state of affairs into being: the speaker's utterance creates an immediate change in the world and thereby establishes a correspondence between the propositional content and the world. Because most of them require an extra-linguistic institution for their performance, they are sometimes referred to as "institutionalized performatives" (Huang, 2006:108). Their successful performance brings about this fit, so the direction of fit is both words-to-world and world-to-words. There is no sincerity condition. It is worth mentioning that only four of the five categories of Searle's taxonomy could be found in the play; no examples of the category of declaratives were found.
Conclusions:
Based on the previous data analysis, it can be concluded that the characters employ speech acts of different forms and functions. Usually the speech acts fluctuated depending on the characters' attitudes and psychological states. In general, the context and the nature of the relationship between the characters play a prominent role in the choice of a specific form of act. In most cases the speech acts were performed successfully, in the sense that the speaker usually carried the act through, as in Text (3), where the perlocutionary effect is clear in that harm was actually done to the hearer. Some of the acts were used indirectly, depending on the circumstances. The first four categories of speech acts, namely assertives, directives, commissives, and expressives, were frequently found in the play with different forms and functions. Acts belonging to the fifth category, declaratives, were nonetheless not found at all in the play.
Bibliography:
"Linguistics"
Three-body contact for fermions. I. General relations
We consider the resonant Fermi gas, that is, two-component fermions in three dimensions interacting by a short-range potential of large scattering length. We introduce a quantity, the three-body contact, that determines several observables. Within the zero-range model, the number of nearby fermion triplets, the large-momentum tail of the center-of-mass momentum distribution of nearby fermion pairs, as well as the large-momentum tail of the two-particle momentum distribution, are expressed in terms of the three-body contact. For a small finite interaction range, the formation rate of deeply bound dimers by three-body recombination, as well as the three-body contribution to the finite-range correction to the energy, are expressed in terms of the three-body contact and of a three-body parameter. This three-body parameter, which vanishes in the zero-range limit, is defined through the asymptotic behavior of the zero-energy scattering state at distances intermediate between the range and the two-body scattering length. In general, the three-body contact has different contributions labeled by spin and angular momentum indices, and the three-body parameter can depend on those indices. We also include the generalization to unequal masses for $\uparrow$ and $\downarrow$ particles. With respect to the relation between three-body loss rate and number of nearby triplets stated in [Petrov, Salomon and Shlyapnikov, PRL 93, 090404 (2004)], the present work adds a derivation, expresses the proportionality factor in terms of the three-body parameter, and includes the general case where there are several contributions to the three-body contact and several three-body parameters.
Introduction
Over the last twenty years, the two-component Fermi gas with zero-range interactions in three dimensions has become one of the most extensively studied quantum many-body problems. One considers particles with two internal states (denoted ↑ and ↓) and an interaction of vanishing range characterized by its s-wave scattering length a 2. When 1/a 2 changes from −∞ to +∞, the interaction changes from weakly to strongly attractive, leading to the BCS to BEC crossover. The strongly correlated regime is reached in the central region of the crossover, around the unitary limit 1/a 2 = 0. While the model was historically introduced as a theoretical abstraction [1][2][3], it accurately describes ultracold gases of fermionic atoms in two hyperfine states near a Feshbach resonance, which are the subject of numerous experimental studies.
For bosons in 3D with resonant interactions, in addition to the two-body contact C 2, measured in [81,82], a three-body contact C 3 appears in several exact relations [46,52,72,83,84]. This appearance of C 3 is linked to the Efimov effect. In particular, a three-body parameter has to be included in the definition of the zero-range model, and C 3 is proportional to the derivative of the energy w.r.t. the three-body parameter. C 3 was measured interferometrically in [82] after an interaction quench to unitarity. A three-body contact also appears for single-component fermions with higher-partial-wave resonant interactions, both in 2D [85] (where a super-Efimov effect occurs) and in 1D [86,87]. Two-body and three-body contacts were also found to be useful for describing short-distance or large-momentum properties in clusters of helium atoms [88] and in nuclei [89][90][91][92][93][94][95], although the corresponding relations are only approximate because the interaction range is not much smaller than the interparticle distance.
Here we show that a three-body contact C 3 plays an important role for two-component fermions with resonant interactions in 3D, although there is no three-body Efimov effect so that the zero-range model is parameterized by the scattering length a 2 without any three-body parameter [96,97].We work within the zero-range model in Section 2, and we consider models with a small finite interaction range in Section 3. Within the zero-range model, the number of triplets of particles separated by a small distance (Section 2.1), the tail of the center-of-mass momentum distribution of pairs separated by a small distance (Sec.2.4), and the tail of the two-particle momentum distribution (Sec.2.5) are expressed in terms of C 3 , and C 3 is also related to the third order density correlation function (Sec.2.2) and to the behavior of the manybody wavefunction when three particles approach each other (Sec.2.3).When the interaction range b is non-zero but still small compared to the other typical lengthscales, we consider two additional observables, the formation rate of deeply bound dimers by three-body recombination Γ 3 (Sec.3.1), and the three-body contribution to the energy correction induced by the finite interaction range δE 3 (Sec.3.2).We express Γ 3 and δE 3 in terms of C 3 , and of a three-body parameter a 3 (which is small in the zero-range regime).We define a 3 through the asymptotic behavior of the three-body zero-energy scattering state at distances ≫ b and ≪ |a 2 |.
We consider N ↑ fermions of spin ↑ and N ↓ fermions of spin ↓, either confined by a smooth external trapping potential, or in a box with periodic boundary conditions. We consider equal masses for ↑ and ↓ particles, and discuss the unequal-mass case in Appendix D. We consider a stationary state throughout the article, and discuss statistical mixtures and non-stationary states in Appendix E. In parallel with presenting the relations involving the three-body contact, we will recall for comparison the known Tan relations involving the two-body contact.
Relations for the zero-range model
In this Section we work within the zero-range model, where interactions are characterized by a single parameter, the two-body scattering length a 2. The zero-range model is defined in Eqs. (13,14). The zero-range limit of finite-range models is expected to be universally described by the zero-range model. 1
Number of nearby fermion triplets
If one measures the positions of all particles, the average number of pairs of particles whose separation is smaller than some ϵ is given by where C 2 is the two-body contact [35,36].Similarly, let us consider the number of triplets of fermions separated by small distances.For three particles 1,2,3, let us define the hyperradius where r i j is the distance between particles i and j .If one measures the positions of all particles, the average number of triplets of particles with hyperradius R < ϵ is given by where the prefactor C 3 is what we call the three-body contact, while the exponent s = 1.772724267 . . . is the lowest positive solution different from 1 of The scaling N 3 (ϵ) ∝ ϵ 2s+2 was already obtained in [98,115] (see Section 2.3 for a rederivation).The anomalous exponent 2s + 2 comes from the analytical solution of the unitary three-body problem [96], and is directly linked to a hidden dynamical symmetry and a separability of the three-body problem in hyperspherical coordinates [38,116,117], or in a field theory point of view, to non-relativistic conformal invariance, with s + 5/2 the scaling dimension of a three-fermion operator [118][119][120].
In general there are two different contributions to the three-body contact, coming from ↑↑↓ and ↑↓↓ spin configurations: Denoting by N 2,1 (resp.N 1,2 ) the contributions to N 3 (ϵ) from triplets of particles of spins ↑↑↓ (resp.↑↓↓), we have where C 2,1 (resp.C 1, 2) is what we call the ↑↑↓ (resp.↑↓↓) three-body contact.Clearly, Remarks: • Due to the antibunching effect associated to the Pauli exclusion between fermions with identical spins, the contribution to Eq. ( 1) coming from pairs of particles with identical spins (↑↑ or ↓↓) is negligible in the ϵ → 0 limit (it scales as ϵ 5 ), and N 2 (ϵ) is dominated by the contribution from pairs of particles with opposite spins (↑↓).
Similarly, the contribution to Eq. ( 3) coming from triplets of particles with identical spins (↑↑↑ or ↓↓↓) is negligible in the ϵ → 0 limit (it scales as ϵ 10 ), and N 3 (ϵ) is dominated by the contributions from triplets of particles with non-identical spins, ↑↑↓ or ↑↓↓, in agreement with Eq. ( 7).• For comparison, in the non-interacting case, the number of nearby pairs and triplets scales as (more generally, these scalings also hold with a finite-range interaction that does not diverge too strongly at small distance, so that the wavefunction is bounded).Equation (8) [resp.Equation ( 9)] is dominated by the contribution from pairs (resp.triplets) of particles with non-identical spins.Equation ( 9) includes the antibunching effect due to the Pauli exclusion between the two identical-spin fermions: The wavefunction vanishes linearly with the distance between these fermions, hence an ϵ 2 suppression factor compared to the completely uncorrelated case of non-interacting distinguishable particles • Compared to the non-interacting case Eqs.(8,9), the exponents in Eqs.(1,3) are reduced, i.e. the probability to find particles near to each other is enhanced.This bunching effect is due to the attractive effect of the resonant zero-range interaction, which causes the wavefunction to diverge: When the distance r between two opposite-spin particles vanishes, ψ ∝ 1/r , which yields Eq. (1), while in the limit of vanishing hyperradius R between three particles, ψ ∝ R s−2 , which yields Eq. ( 3).Note that since s < 2, the wavefunction indeed diverges for R → 0, and the exponent in Eq. ( 3) is smaller than in the uncorrelated case Eq. (10), which means that the bunching effect due to the zerorange interactions overcompensates the antibunching effect due to Pauli exclusion. 2
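A back-of-the-envelope power count makes these exponents explicit (prefactors omitted; only the small-separation behaviors quoted above are used):
\[
N_2(\epsilon)\ \propto\ \int_0^{\epsilon} |\psi|^2\, r^2\, dr\ \propto\ \int_0^{\epsilon} \frac{r^2}{r^2}\, dr\ \propto\ \epsilon,
\qquad
N_3(\epsilon)\ \propto\ \int_0^{\epsilon} |\psi|^2\, R^5\, dR\ \propto\ \int_0^{\epsilon} R^{2s-4}\, R^5\, dR\ \propto\ \epsilon^{2s+2},
\]
where r^2 dr and R^5 dR are the three- and six-dimensional relative volume elements. A bounded (non-interacting) wavefunction would instead give \epsilon^3 per opposite-spin pair and \epsilon^6 per triplet of uncorrelated particles, the Pauli zero between the two same-spin fermions supplying the extra \epsilon^2 suppression mentioned above for non-interacting ↑↑↓ triplets.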
Link with the many-body wavefunction
When three particles approach each other, the many-body wavefunction has a singular asymptotic behavior, with a prefactor related to the three-body contact. Let ψ(r 1, . . ., r N) be the (orbital) many-body wavefunction. Without loss of generality, we can assume N ↑ ≥ 2, and consider that particles (1, 2, 3) have spins (↑, ↑, ↓). We take this convention throughout the article. This means that ψ is antisymmetric w.r.t. exchange of r 1 and r 2 (ψ is also antisymmetric w.r.t. exchange of any other pair of same-spin particles). We take the normalization ∫ d³r 1 · · · d³r N |ψ|² = 1.
Within the zero-range model, the stationary Schrödinger equation contains an external trapping potential U but no interaction potential; instead, ψ should satisfy a contact condition in the limit where two particles of opposite spin approach each other: There exists A such that where a 2 is the two-body scattering length, r = ∥r 1 − r 3 ∥ is the distance between the oppositespin particles 1 and 3, and c = (r 1 + r 3 )/2 is their center-of-mass.The limit r → 0 is taken for fixed c and fixed positions of the remaining particles (r 2 , r 4 , . . ., r N ).By antisymmetry, a similar contact condition automatically also holds for all other pairs of opposite-spin particles, and Eqs.(13,14) are sufficient to define the eigenstates ψ and energies E of the zero-range model. 3 When particles 1, 2 and 3 approach each other, the wavefunction of any stationary state has the asymptotic behavior [38,96,98,115] Here, as in Eq. ( 11), R is the hyperradius of particles (1, 2, 3) defined in Eq. ( 2), C is their centerof-mass, and Ω denotes their hyperangles.The limit R → 0 is taken for fixed Ω, C, r 4 , . . ., r N .The unitary hyperangular wavefunctions φ m (Ω) are such that R s−2 φ m (Ω) is a solution of the three-body problem at zero energy and infinite scattering length with total angular momentum quantum numbers l = 1 and m ∈ {−1, 0, 1}, see Appendix A for more details. 4,5 It is known [35] that C 2 is given by the norm of the function A that appears in Eq. ( 14), Similarly, the three-body contact is given by the norm of the function B that appears in Eq. ( 15), The expression (5) for the number of nearby ↑↑↓ fermion triplets, together with Eq. ( 17), simply follow from Eq. ( 15) by integrating |ψ| 2 over the R < ϵ region. 6Similarly, the relation involving g 2,1 , Eq. ( 11), follows immediately from Eqs. (15,17) and from the expression of g 2,1 in first quantization, g 2,1 (r 1 , r 2 , There is a completely analogous relation between C 1,2 and the behavior of the many-body wavefunction when three particles of spins ↑↓↓ approach each other (provided N ↓ ≥ 2).Specifically, considering that particle 4 has spin ↓ (while particles 1, 3 still have spins ↑, ↓) and denoting by R, Ω, C the hyperradius, hyperangles and center-of-mass associated to particles 3, 4, 1 [obtained by replacing (r 1 , r 2 , r 3 ) with (r 3 , r 4 , r 1 ) in Eqs.(88,89,90)] we have 3 Configurations with a vanishing interparticle distance are implicitly excluded in (13).In an equivalent alternative formulation, these configurations are included and regularized delta pseudopotential terms are added [97,116]. 4There is a similarity between the two-body and three-body short-distance asymptotic behaviors Eqs. ( 14) and (15), given that 1/r − 1/a 2 is a solution of the two-body problem at zero energy. 5An asymptotic behavior similar to (15) holds when any three particles with spins ↑↑↓ approach each other, the functions B m corresponding to different triplets of particles being simply related to each other by antisymmtery. 6Here we used the change of integration variables (r 1 , r 2 , r 3 ) −→ (r, ρ, C) with ρ defined in (88), of Jacobian 3 .We also used the property which yields the relations for N 1,2 and g 1,2 , Eqs. (6,12), with Here we assumed N ↓ ≥ 2; if N ↓ = 1, then N 1,2 is obviously zero, and C 1,2 = 0.
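For orientation, the structure of the two short-distance conditions just used can be sketched as follows; this is a reconstruction in a commonly used convention, and the precise prefactors and normalizations of the paper's Eqs. (14)-(17) are not reproduced here. The pair (Bethe-Peierls) contact condition has the form
\[
\psi(\mathbf r_1,\dots,\mathbf r_N)\ \underset{r\to 0}{\simeq}\ A(\mathbf c;\mathbf r_2,\mathbf r_4,\dots,\mathbf r_N)\left(\frac{1}{r}-\frac{1}{a_2}\right)+O(r),
\]
while when the ↑↑↓ triplet (1, 2, 3) comes together,
\[
\psi\ \underset{R\to 0}{\simeq}\ R^{\,s-2}\sum_{m=-1}^{1}\varphi_m(\Omega)\, B_m(\mathbf C;\mathbf r_4,\dots,\mathbf r_N),
\]
and C 2 and C 2,1 are then proportional to the squared L² norms of A and of the B m, respectively.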
Remark: Higher-body contacts can be defined in the same way than the three-body contacts.
When j ↑ particles of spin ↑ and j ↓ particles of spin ↓ approach each other, the N -body wavefunction factorizes into the product of (i) a function of the relative positions of the j = j ↑ + j ↓ nearby particles, given by a zero-energy solution of the j ↑ + j ↓ body problem at a 2 =∞, proportional to the j body hyperradius to some power, and (ii) a function of the center-of-mass of the j nearby particles and of the positions of the (N − j ) other particles [38,115].The L 2 norm of the latter function defines the j ↑ + j ↓ body contact C j ↑ , j ↓ (up to a prefactor which is a matter of definition).
Large-momentum tail of the center-of-mass momentum distribution of nearby pairs
Since C 2 and C 3 determine short-distance singularities, it is natural that they also determine large-momentum tails.C 2 determines the leading tail of the single-particle momentum distribution [36] N with the normalization N σ (k) d 3 k/(2π) 3 = N σ (in the case of periodic boundary conditions, momenta become discrete and momentum integrals should be replaced by sums).C 3 also determines a large-momentum tail.Suppose that one measures, for a pair of particles with opposite spin, both their spatial separation r and their center-of-mass momentum K (this is allowed since the corresponding operators commute).Let N 2 (ϵ, K) be the probability distribution over K conditional to r < ϵ, with the normalization N 2 (ϵ, K) d 3 K /(2π) 3 = N 2 (ϵ).In other words, N 2 (ϵ, K) is the center-of-mass momentum distribution of the pairs of particles separated by a distance < ϵ, normalized to the total number of such pairs. 7We have where N P (K) is what we call the center-of-mass momentum distribution of nearby fermion pairs.With this definition, one simply has the normalization as a consequence of (1). 8The tail of N P (K) is determined by the three-body contact 9 : 7 More formally, N 2 (ϵ, K) is the expectation value of the operator i :↑, j :↓ θ(ϵ − ri j ) (2π) 3 δ 3 ( Ki j − K), where θ is the Heaviside function, while ri j = ∥r j − ri ∥ and Ki j = ki + kj are the operators corresponding to the relative distance and the center-of-mass momentum of particles i and j . 8N P (K) appears naturally in the diagrammatic formalism: In a homogeneous system, N P (K) divided by the volume equals −(m 2 /ħ 2 ) Γ(K, τ = 0 − ) where Γ is the pair-propagator defined e.g. in [66].This can be shown using a lattice model [56], for which 1/(r r ′ ) in ( 27) can be replaced by φ(r)φ(r ′ ) with φ the zero-energy two-body scattering state, and setting r = r ′ = 0. 9 The fact that Γ(K , τ = 0 − ) has a tail ∝ C 3 /K 2s+4 was pointed out to us by Shina Tan (private communication, Aspen, 2011).
with the prefactor whose numerical value is Here NP (K ) stands for the angular average N P (K) d K/(4π) (with K := K/K , and d K the differential solid angle, so that To derive this result, we consider the two-body reduced density matrix Inserting the two-body contact condition ( 14) into (26) yields where We see that g P can be physically interpreted as a coherence function for pairs of nearby fermions.Accordingly, N P is related to g P by Fourier transformation, as can be formally shown using the definition (21) of N P .Hence We then follow a reasoning resembling the one used to derive Eq. (20) in Sec.IV.A of [56].
In the large K limit, the Fourier transform with respect to c in (30) is dominated by the contributions from the singularities of A(c; r 2 , r 4 , . . ., r N ), which occur when c approaches one of the r i (i = 2, 4, . . ., N ).This corresponds to particles 1, 3 and i being close to each other [since A(c; r 2 , . ..) determines the wavefunction ψ when particles 1 and 3 are close to c, according to Eq. ( 14)].For example, for i = 2, the behavior of A(c; r 2 , r 4 , . . ., r N ) in the limit c → r 2 is determined by the asymptotic behavior of ψ when the three particles 1, 2, 3 are close, given by Eq. (15).Therefore, we just need to take the limit where particles 1 and 3 approach each other in (15) to obtain where we used the expression (98,99) of φ m (Ω).There is a similar singularity of A(c; r 2 , r 4 , . . ., r N ) when c approaches r i with 4 ≤ i ≤ N ; when particle i has spin ↓, the function Bm introduced in (18) appears instead of B m .This gives (1 − 2 δ i ,4 ) e −i K•r i Bm (P 4i (r 2 ; r 4 , . . ., r N )) (31) where the first (resp.second) sum over i is taken over particles with spin ↑ (resp.↓), P j i (r 2 ; r 4 , . . ., r N ) is obtained from (r 2 ; r 4 , . . ., r N ) by exchanging r j with r i (P i i is the identity), and I m (K) := s d 3 u e i K•u u s−1 Y m 1 ( û).The latter integral can be evaluated analytically: Using with j 1 (t ) = (sin t −t cos t )/t 2 , and evaluating the remaining integral over u by integrating along a closed contour including the positive real axis and negative imaginary axis, we get Inserting (31,32) into (30), expanding the modulus squared, and neglecting in the large K limit the cross terms coming from two different values of i , 10 we obtain the result (23,24,25), where we used the value of N given in Eq. ( 100) of Appendix A.
Large-momentum tail of the two-particle momentum distribution
The three-body contact also determines the asymptotic behavior at large momenta of the twoparticle opposite-spin momentum distribution function, defined by where Nσ (k) := ĉ † σ (k) ĉσ (k) with ĉσ (k) = d 3 r e −i k•r ψσ (r) the annihilation operator of a particle of spin σ in the state |k〉 defined by 〈r|k〉 = e i k•r .Since Nσ (k) Experimentally, the two-particle momentum distribution can be accessed from the statistics of time-of-flight images, as was recently demonstrated in 2D [122]. 11 Taking the limit of a large relative momentum k → ∞, and integrating over the center-of-mass momentum K, one obtains a tail proportional to C 2 , as pointed out in [128].If we instead send K to infinity, and average over the direction of K, we obtain a tail proportional to C 3 , lim where the constant M was given in (24).This result immediately follows from (23) and from the relation obtained in [128] and rederived in the sequel. 12,13The definition (33) yields 10 By power counting, these cross terms give rise to a ∝ 1/K 2s 4 +4 tail of NP (K ), where s 4 is the smallest scaling exponent of the unitary four-fermion problem; this is indeed negligible compared to the leading 1/K 2s+4 tail of NP (K ), given that s 4 = 2.509(1) is larger than s ≡ s 3 .This value of s 4 follows from the four-body ground-state energy E 4 in an isotropic harmonic trap computed in [121] and the relation [115,117] E N = (s N + 5/2)ħω. 11In 3D, early measurements in the BEC regime were reported in [123]; see also [124] for a recent numerical study.In optical lattices, detailed experimental studies were carried out in recent years using metastable helium atoms [125][126][127].
12 Relation (34) also follows from (36), given (22). 13In Ref. [128], N P (K) was defined by where ρ 2 is the two-body reduced density matrix, which has the diverging behavior (27) when r and r ′ tend to zero.This short-distance divergence leads to a k → ∞ tail of the Fourier transform Eq. ( 37), which can be computed by replacing ρ 2 with its asymptotic expression (27), and using the identity (in the sense of distributions) d 3 r e −i k•r /r = 4π/k 2 .Using (29) then yields (36).
Relations for finite-range models
In this Section, we go beyond the zero-range model and consider interactions of small but nonzero range. We express two observables in terms of the three-body contact: the rate of three-body recombination towards deeply bound dimers (Sec. 3.1), and the three-body contribution to the energy difference between finite-range and zero-range models (Sec. 3.2).
Three-body loss rate
In ultracold atom experiments, three-body losses generically take place, being a manifestation of the fact that the true equilibrium state at such low temperatures is not gaseous (with the exception of polarized hydrogen). In this Section we relate the rate of three-body losses to the three-body contact.
Simple finite-range interaction
To describe three-body losses, we need to go beyond the zero-range model.In this subsection we consider a simple model where fermions of different spin interact through a rotationally invariant potential V 2 (r ), of finite range b. 14 We consider the resonant regime where the twobody scattering length a 2 is large, In this regime, there are two kinds of two-body bound states: • the weakly bound dimer, of binding energy ≈ ħ 2 /(ma 2 2 ), which exists for a 2 > 0 • deeply bound dimer(s), of binding energy ≳ħ 2 /(mb 2 ), which exist if the interaction potential is deep enough (as in generic cold atom experiments).
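To put these two energy scales in perspective, here is a small numerical illustration with assumed, purely illustrative parameters (a lithium-6-like mass, a 2 = 2000 Bohr radii, b = 30 Bohr radii); none of these numbers come from the text.

# Illustrative comparison of the two dimer energy scales quoted above,
# hbar^2/(m a_2^2) (weakly bound dimer) vs hbar^2/(m b^2) (deep-dimer scale).
# All parameter values below are assumptions for illustration only.
hbar = 1.054571817e-34      # J s
u = 1.66053906660e-27       # kg, atomic mass unit
a0 = 5.29177210903e-11      # m, Bohr radius
kB = 1.380649e-23           # J/K

m = 6.015 * u               # assumed: a lithium-6-like atomic mass
a2 = 2000 * a0              # assumed scattering length (resonant regime, a2 >> b)
b = 30 * a0                 # assumed interaction range (van der Waals scale)

E_shallow = hbar**2 / (m * a2**2)
E_deep_scale = hbar**2 / (m * b**2)

print(f"hbar^2/(m a2^2) ~ {E_shallow/kB*1e6:.2f} microK, hbar^2/(m b^2) ~ {E_deep_scale/kB*1e3:.1f} mK")
print(f"ratio (a2/b)^2 = {(a2/b)**2:.0f}")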
We consider a stationary solution of the N -body Schrödinger equation in the zero-range regime 1/k typ ≫ b (41) and ( 36) was deduced from the expression which has the asymptotic behavior where 1/k typ is defined as the smallest scale of variation of the stationary wavefunction ψ in the region where all interparticle distances are ≫ b. 15 Let us first consider the case where there are no deeply bound states.For simplicity we also assume that the spectrum is discrete (which is the case in a trapping potential of infinite depth -e.g. a harmonic trap-or in a box with periodic boundary conditions).Then, in the zerorange regime (41), the zero-range model is valid for any stationary state ψ, in the sense that standard observables tend to their respective values within the zero-range model.This includes observables such as the energy, as well as N 2 (ϵ) and N 3 (ϵ) provided ϵ ≫ b [to reach the asymptotic regimes of Eqs.(1,3) one also needs ϵ ≪ 1/k typ ].
We turn to the experimentally relevant case where deeply bound dimers exist.These deeply bound dimers can be formed through recombination processes between three atoms.Let us denote by Γ 3 the number of such events per unit of time.The recombination products (the deeply bound dimer and the third atom) escape from the trapped gas, provided the trapping potential U (r) has a finite depth much smaller than the binding energy (∼ ħ 2 mb 2 ) of deeply bound dimers.In typical experiments, this condition holds, and other loss processes are negligible, which allows one to measure Γ 3 from the decay of the number of trapped atoms, Ṅ = −3 Γ 3 [11,[129][130][131][132].
In the zero-range regime, this decay is slow compared to the other timescales of the problem, as we will see.A standard way to describe such a slowly decaying state in quantum mechanics is to consider a quasi-stationary Gamow state, i.e., a solution of the Schrödinger equation with a complex energy and an outgoing-wave asymptotic behavior [133][134][135][136][137][138].Accordingly, we will consider a solution ψ of (39,40) with a complex E , and an outgoing-wave asymptotic behavior corresponding to the recombination products (a deeply bound dimer + an atom) flying apart towards large distances. 16For such a quasi-stationary state, in the zero-range regime, standard observables again tend to their respective values within the zero-range model. 17,18The threebody loss rate Γ 3 , however, is simply zero within the zero-range model.To compute Γ 3 , one thus needs to go beyond the zero-range model.As we will see, one only needs to do so for the threebody problem, in order to define a three-body parameter a 3 .We then find where C 3 can be evaluated within the zero-range model.Furthermore, breaking up Γ 3 into the sum of the two contributions Γ 2,1 and Γ 1,2 corresponding to ↑↑↓ and ↑↓↓ loss processes, we have We expect these relations to be asymptotically exact in the resonant zero-range regime (38,41). 15For example, for the ground state of the homogeneous unpolarized gas, for a 2 > 0, where k F is the Fermi momentum; for the ground state of a few particles in an isotropic harmonic trap of frequency ω, 1/k typ is ∼ a ho for a 2 < 0 and ∼ Min(a ho , a 2 ) for a 2 > 0, where a ho := ħ/(mω) is the harmonic oscillator length. 16An alternative approach would be to use the Lindblad equation.We expect that this would lead to the same result for the loss rate, as was checked for three-body losses for bosons in [139]. 17An appropriate normalization of the Gamow state will be given below in Eq. ( 58).Similarly, the expectation value of an observable Ô should be defined as 18 Within the zero-range model, it is convenient to add steep infinite walls to the trapping potential at the boundary of R trap , in order to have truly stationary states, thereby neglecting the exponentially suppressed evaporation process discussed in footnote 23.
To define the three-body parameter a 3 , we consider the zero-energy free-space solution of the Schrödinger equation (39,40) for three particles of spins ↑↑↓ and angular-momentum quantum numbers (l = 1, m) whose asymptotic behavior has the form in the region {b ≪ r i j ≪ |a 2 |, ∀i < j } where all interparticle distances are large compared to the range but small compared to the two-body scattering length.Here we have neglected the deep-dimer + atom outgoing wave, since it is proportional to the dimer wavefunction which is exponentially suppressed at distances ≫ b.The fact that a 3 does not depend on the quantum number m follows from rotational invariance of the interaction. Remarks: • Relation ( 42) is reminiscent of the known relation [44,139] between two-body loss rate and two-body contact 19,20 • Im a 3 must be negative, since the loss rate is positive.
• Typically one has the order of magnitude estimate a 3 ∼ b 2s , assuming that there is no extra fine-tuning. 21Therefore Γ 3 ∝ b 2s is small in the zero-range regime, as anticipated.• Based on heuristic arguments, it was already stated in [98] (see also [130,140,149]) that in the zero-range regime, the formation rate of deeply bound dimers is proportional to the probability of finding three particles at distances ≲ b, times ħ/(m b 2 ), that is, , with N 3 (b) evaluated within the zero-range model, and K a dimensionless prefactor that depends on short-range three-body physics.This statement is equivalent to (42), given the relation (3), with K = −8 s (s+1) Im a 3 /b 2s .The novelties of the present work are (i) to provide a derivation of the relation (42), and hence of the above statement from [98], (ii) to introduce the natural single parameter a 3 (instead of the two parameters K and b), and (iii) to generalize the relation to more complex interactions (in the following Section 3.1.2). • Let us consider the case of the homogeneous unpolarized zero-temperature unitary gas, of number density n.Introducing the three-body contact density C 3 := C 3 /V with V 19 Relation (46) concerns the situation where the states ↑ and ↓ populated in the gas are not the two energetically lowest atomic internal states, so that inelastic two-body collisions towards lower lying states are energetically allowed, and a 2 acquires an imaginary part. 20Relation (46) was obtained in [44] using the relation (84) and Im E = −ħ Γ 2 /2.It was rederived in [139] using the Lindblad equation.For completeness, we note that the relation (46) can also be derived by a flux computation.The main steps of this derivation are as follows.We consider a solution ψ of the zero-range model (13,14) with complex values of 1/a 2 and E .The corresponding time-dependent wavefunction is Ψ(t ) = ψ e −i E t /ħ , and we have Using (55,56), we obtain that Γ 2 equals N ↑ N ↓ times the limit when ϵ → 0 of the probability flux entering into the region {(r 1 , . . ., r N )|r < ϵ}, which can be simplified to (14) then yields (46). 21Indeed, the behavior (45) has to be matched at R of order b with the solution inside the potential, which typically imposes that the two terms in (45) are of the same order of magnitude for R ∼ b [97,140].For simplicity, we excluded here the special regime |a 3 | ≫ b 2s that corresponds to the vicinity of a three-body resonance (see [141,142], and [38,117,[143][144][145][146][147][148] for the mass-imbalanced case with 0 ≤ s < 1).Reaching this regime would require a second fine-tuning of the interaction, in addition to the first fine-tuning that causes |a 2 | ≫ b (for cold atoms, it would require a second control parameter of the interaction, in addition to the magnetic field used to tune a 2 to large values).We expect the relations (42,43,44) and the other results of this article to remain applicable in the three-body resonant regime provided the threebody parameter(s) remain(s) small (in modulus) compared to 1/k 2s typ .
the volume, dimensional analysis gives Therefore, as already found in [98], the timescale of three-body losses τ 3 := n/| ṅ| is of order τ F /(k F b) 2s , much larger than the thermalization time τ F ∼ m/(ħ k 2 F ), so that the gas remains at quasi-equilibrium.
• For bosons (and more generally in presence of the Efimov effect) it is commonly accepted that three-body losses can be described by making the three-body parameter complex [150][151][152].We have transposed this to the fermionic case (where the Efimov effect does not occur) by introducing a complex three-body parameter a 3 .The expression (42) of Γ 3 is reminiscent of the relation for bosons expressing Γ 3 in terms of C 3 and the inelasticity parameter (i.e. the phase of the three-body parameter) [72,139].An important difference is that in the fermionic case, in the zero-range regime, the three-body parameter is small, and can be set to zero when evaluating typical observables other than Γ 3 .• The notion of three-body parameter a 3 differs from the three-body scattering hypervolume D which was defined for bosons in 3D [153,154] and for various other cases [155][156][157][158]. Presumably, D could be defined also for the present case (two-component fermions in 3D), and as in [153][154][155][156][157][158] We turn to the derivation of (42,43,44).Our reasoning is similar to the bosonic case treated in App.B of [72], but the present case is significantly more complicated because we cannot work directly within the zero-range model.For R ≫ b, the a 3 /R s term in (45) is negligible compared to the R s term, consistently with the behavior (15) within the zero-range model.However this a 3 /R s term cannot be neglected to describe three-body losses, since the losses come from the non-zero imaginary part of a 3 , as we will see.Accordingly, for the Gamow state ψ, we go beyond the zero-range model and replace (15) with which we expect to hold provided where and whose appropriate choice will be discussed more precisely below.The purpose of ( 51) is to ensure that the interparticle distances within the triplet of particles 1, 2, 3 are much smaller than all other interparticle distances.
We use the shorthand notation X = (r 1 , . . ., r N ).The solution Ψ(X; t ) of the time-dependent Schrödinger equation [i ħ Ψ = H Ψ, with H given by ( 40)] associated with the Gamow state ψ It satisfies the continuity equation in terms of the probability current We express the loss event rate (i.e. the probability for a loss event to occur per unit of time) as where the region R should physically correspond to N atoms in the trap, and where There is some freedom in how to define R. Let R trap denote the "trapping region" of the potential U (r), which can be defined as the set of points r such that the classical trajectory of a particle with initial position r and zero initial velocity remains bounded. 22A possible definition of R would be R = (R trap ) N , but for later convenience, we choose R to be slightly smaller, by excluding configurations with two or three nearby particles: where In other words, (r 1 , . . ., r N ) ∈ R means that • all particle positions r i are inside the trapping region R trap • all distances between pairs of particles with opposite spin are ≥ d 2 • all hyperradii of triplets of particles with non-identical spins are ≥ d 3 .
We take Conditions (53,62) ensure that both d 2 and d 3 are ≪ 1/k typ .Hence, to leading order, the normalization integral (58) is independent of d 2 and d 3 , because R 2 and R 3 are negligibly small subsets of (R trap ) N .Furthermore, to leading order in the zero-range regime, • there exists a stationary state of the zero-range model whose wavefunction (normalized by the usual integral over the entire space) is close to the Gamow state ψ(X) for X ∈ R • in (48), B m (C; r 4 , . . .r N ) can be evaluated within the zero-range model, provided C, r 4 , . . .r N all belong to R trap ; indeed, the ∝ a 3 term is a small correction, and the condition (48) for the Gamow state must match with the condition (15) for the zerorange model.
Equations (54,57,58) directly yield which we will use for a consistency check in Sec.3.2.Here we will evaluate the loss rate by a flux computation.We use the notation for the probability flux through a surface S .In (57), we interchange the time derivative and the integration, and use the continuity equation ( 55) and the divergence theorem, which yields where ∂R is the boundary of R.
Here and in what follows, the differential surface vector d 3N−1 S appearing in ( 64) is oriented towards the exterior of R. Assuming that the trap depth is large enough for evaporation to be negligible 23 , we have where A visual representation is shown in Figure 1.On physical grounds, we identify Φ(S 2 ) as the twobody loss rate Γ 2 , and Φ(S 3 ) as the three-body loss rate Γ 3 , which determine the average number of lost atoms per unit of time: In the considered regime of small range and large trap depth, we expect that Γ 2 is given by the relation (46), and that in the considered case where a 2 is real, Γ 2 is negligible compared to Γ 3 so that Γ ≃ Γ 3 . 24 It remains to determine Γ 3 = Φ(S 3 ).We will use the notation From (60), we have S 3 = i < j :↑,k:↓ or i < j :↓,k:↑ the contribution from ↑↓↓ triplets. 23Evaporation is the process of an individual atom (or a weakly bound dimer for small positive a 2 ) escaping directly from the trapping region (because its energy is large due to a rare fluctuation, and/or it tunnels through the trappingpotential barrier).This process is exponentially suppressed in the limit where the trap depth is large compared to the typical energy per atom (subtracting the dimer binding energy contribution for a 2 > 0).For evaporation of individual atoms, the trap depth can be defined as , the evaporation rate is Φ(S trap ) with , and this rate is exponentially suppressed because J(X) with X ∈ S trap is exponentially suppressed. 24Indeed, in the zero-range limit, the reasoning of footnote 20 yields the expression (46) for Φ(S 2 ) = Γ 2 ; moreover, in the case where a 2 ∈ R, we expect Γ 2 ≪ Γ 3 , because there is no mechanism that would generate a non-negligible flux exiting R through S 2 (hence entering into R 2 ) and propagating in R 2 with an initial wavevector high enough to climb the trapping potential barrier and escape from (R trap ) N .r ρ Figure 1.Geometric illustration of the three-body loss process.The total decay rate is given by the probability flux exiting from the region R (grey shaded area).Neglecting the flux through S 2 (red straight lines) which corresponds to two-body losses, and the flux through S trap (dashed red lines) which corresponds to evaporation, the dominant contribution is the flux through S 3 (green circular arcs) which corresponds to three-body losses.The blue arrows represent the deep-dimer + atom outgoing wave, corresponding to a deeply bound dimer and an atom flying apart with a large relative momentum and escaping from the trap (this wave propagates in the region R 2 ).For the purpose of making this illustrative drawing two-dimensional, we considered N = 3 particles in one space dimension, and fixed the center-of-mass coordinate to C = 0.The positions of the three particles are then determined by the Jacobi coordinates r and ρ.The trapping region was simply assumed to be a symmetric interval around the origin.
By antisymmetry, each ↑↑↓ triplet gives the same contribution, so that (123) .(67) This can be rewritten as25 We then simplify this expression in two steps: (i) replace ψ with its asymptotic behavior (48), with B m evaluated within the zero-range model (ii) replace the integration domain S (123) by the entire region ∂B 123 .
Step (i) is justified since the conditions (49,50,51) hold except in a negligibly small domain of the integration variables C, r 4 , . . ., r N , Ω.
Step (ii) is justified because given (62), the condition X ∉ R 2 only excludes a negligibly small region of hyperangles Ω.We note that the order of these two steps is important. 26Using (17) then yields the result (43).The expression (44) of Γ 1,2 is derived analogously.
Let us now discuss in more detail the appropriate choice of the length d 3 .The condition (52) is actually not sufficient in order to have the behavior (48) of ψ.Indeed, we expect [based on the small-R expansion of the finite-energy solution J s (kR) of the hyperradial Schrödinger equation (93)] that in addition to the term R s in (48), there is a higher-order correction term of order R s+2 k 2 typ , which is negligible compared to a 3 /R s provided we take The condition d 3 ≪ 1/k typ is then automatically satisfied.Compatibility with (52) then requires (k typ b) 1/(s+1) ≪ 1, which is a quite stringent condition given the smallness of the exponent 1/(s + 1) ≃ 0.36.However, while this condition is necessary for (48), we do not expect it to be necessary for the final expression (42) of the three-body loss rate.
To complete our discussion of validity conditions, we now consider the contribution of the angular-momentum sector l = 0. Another condition to fulfill in order for (48) to be valid is that one can neglect the contribution coming from the l = 0 sector of the unitary three-body problem.Let us denote by s ′ := s l =0,n=0 = 2.166221977 . . . the smallest solution different from 2 of s ′ cos(s ′ π/2) + 4 sin(s ′ π/6)/ 3 = 0, and by φ ′ (Ω) := φ (l =0,n=0) (Ω) the corresponding l =0 hyperangular wavefunction.There is a higher-order correction to the r.h.s. of (48) given by R s ′ −2 φ ′ (Ω) times a function B ′ (C; r 4 , . . ., r N ).Requiring this ∝R s ′ correction to be negligible compared to the ∝a 3 /R s term in (48) would yield an additional condition on d 3 , but this is not necessary for the final result (42).Instead, what is truly necessary for (42) is that the contribution from the l =0 ing a minor error in an intermediate step in [72]).We need to evaluate We then perform the change of variables X −→ (C, R) , and rewrite the integrand as 26 If we would start by replacing S (123) with ∂B 123 ∩ (R trap ) N (keeping the original Gamow-state wavefunction ψ), then we would get a vanishing result for the total flux : The flux through ∂B 123 ∩ (R trap ) N \ R 2 (corresponding to the three-atom wave partially reflected from the small-R region) would be compensated by the flux through the complementary surface R 2 ∩ ∂B 123 ∩ (R trap ) N where the deep-dimer + atom outgoing-wave contribution to ψ gives the main contribution to the flux.This follows from the fact that for the three-body zero-energy scattering state Ψ m , the total flux ) vanishes (a related discussion can be found in [154]).
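The quoted value of s′ can be checked numerically. The sketch below assumes that the l = 0 transcendental equation reads s′ cos(s′π/2) + (4/√3) sin(s′π/6) = 0; the 1/√3 factor is our reading of the expression above, and with it both the trivial root s′ = 2 and the quoted value s′ ≈ 2.1662 are reproduced.

# Numerical check of the l = 0 exponent s' quoted above.  Assumed form of the
# transcendental equation (the 1/sqrt(3) factor is an assumption; it reproduces
# both the trivial root s' = 2 and the quoted value s' = 2.166221977...):
#     f(s') = s' * cos(s' * pi / 2) + (4 / sqrt(3)) * sin(s' * pi / 6) = 0
import math
from scipy.optimize import brentq

def f(s):
    return s * math.cos(s * math.pi / 2) + 4 / math.sqrt(3) * math.sin(s * math.pi / 6)

assert abs(f(2.0)) < 1e-9          # s' = 2 is an exact, trivial solution
s_prime = brentq(f, 2.05, 2.5)     # smallest solution different from 2 (f > 0 on (0, 2))
print(s_prime)                     # ~ 2.166221977, matching the value in the text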
General interactions
In cold-atom experiments, interactions are more complex than the simple model of Sec. 3.1.1. Not only two-body interactions but also three-body interactions are present. Moreover,
• interactions are not necessarily rotationally invariant around any axis, due to the presence of the external magnetic field;
• interactions are not necessarily symmetric w.r.t. exchanging the roles of ↑ and ↓.
As a minimal model including these features, we consider a two-body interaction potential V 2 (r) which may now depend on the direction of r, and a three-body interaction potential V 2,1 (resp.V 1,2 ) between triplets of particles of spins ↑↑↓ (resp.↑↓↓).The corresponding stationary N -body Schrödinger equation is Here R i j k denotes the Jacobi coordinates associated to particles i , j , k [defined by replacing the indices 1,2,3 by i , j , k in (88,89)].
The two-body interaction V_2 is still assumed to be resonant, |a_2| ≫ b_2. We find that the ↑↑↓ and ↑↓↓ three-body loss rates are given by (71) and (72). Here, the m-resolved three-body contacts C^(m)_{2,1} and C^(m)_{1,2} are defined in terms of B_m, which is related to the many-body wavefunction (in the zero-range limit) through Eq. (15), and similarly in terms of B̃_m, defined in Eq. (18). Note that from Eqs. (17,19) we have Σ_{m=-1}^{1} C^(m)_{2,1} = C_{2,1} and Σ_{m=-1}^{1} C^(m)_{1,2} = C_{1,2}. The three-body parameters a^(m)_{2,1} are defined as follows: the solution Ψ_m of the zero-energy Schrödinger equation in free space for two ↑ and one ↓ particles with angular-momentum quantum numbers (l = 1, m) has an asymptotic behavior, in the region where all interparticle distances are ≫ b and ≪ |a_2|, whose coefficient defines a^(m)_{2,1}. The three-body parameters a^(m)_{1,2} are defined similarly in terms of the ↑↓↓ three-body scattering states. The relation (71) is derived by considering the probability current, in a completely analogous way to Eqs. (57,65,67,68) above, using the asymptotic behavior of the many-body wavefunction in the region (49,50,51). The expression (72) of Γ_{1,2} is derived analogously.
Discussion:
The parameters Im a^(m)_{2,1} and Im a^(m)_{1,2} are a priori unknown (footnote 27). In principle, one may hope to compute them theoretically by solving a sufficiently realistic three-body problem, but this is a difficult task [163-165] (footnote 28). Instead, one could determine them by measuring the three-body loss rate in situations where the three-body contacts C^(m)_{2,1} and C^(m)_{1,2} are known theoretically. A first possibility is to work with a small number N of particles in a microtrap [166-169], where the N-body wavefunction can be computed numerically with good accuracy [106,170,171], so that the three-body contacts C^(m)_{i,j} could be calculated. Measuring Γ_3 in six different states and inverting (71,72) would allow one to determine the six parameters Im a^(m)_{i,j}. The case of three particles in an isotropic harmonic trap is particularly simple: the problem is analytically solvable [115,172], and if one prepares one of the six degenerate ground states corresponding to a given quantum number m ∈ {-1; 0; 1}, the corresponding contact is the only non-zero three-body contact, so that Γ_3 is simply proportional to Im a^(m)_{N↑,N↓}. Explicitly, taking

Footnote 27. One may expect a small relative difference between a^(0)_{i,j} and a^(±1)_{i,j}, and an even smaller one between a^(1)_{i,j} and a^(-1)_{i,j}, similarly to the m-dependence of the two-body p-wave scattering volume not too close to a p-wave Feshbach resonance [160-162].

Footnote 28. For such a computation of the three-body parameters, one may need to take into account that an atom has more than two relevant internal states |ν⟩. However, we expect the general relations (71,72) to remain applicable. Indeed, we expect that in typical experiments, the atoms mainly occupy two internal states, which we can label ν = ↑ and ↓, and if all interatomic distances r_ij ≫ b, the many-body wavefunction Φ(r_1, ν_1; …; r_N, ν_N) is non-negligible only if all ν_i belong to {↑; ↓}, in which case Φ is given to good accuracy by antisymmetrizing the wavefunction ψ(r_1, …, r_N) of the zero-range model.
for example N_↑ = 2 and N_↓ = 1, we get C^(m)_{2,1} = (mω/ħ)^s × 1.266322 (in agreement with the scaling given in [172]) (footnote 29). A second possibility is to work with a homogeneous (or locally homogeneous) unpolarized gas at equilibrium, for which the six three-body contacts C^(m)_{i,j} are all equal, as shown in Appendix B. This gives Eq. (79), with ā_3 the mean three-body parameter. One could then determine Im ā_3 by measuring Γ_3 in a weakly correlated regime, where the asymptotic behavior of the three-body contact density C_3 can be computed exactly. A first option is the non-degenerate regime, where we have computed C_3 for negative or infinite scattering length (footnote 30). Other options are the weakly interacting regimes where k_typ|a_2| is small (footnote 31).
Three-body contribution to the finite-range correction to the energy
In this Section, we study the corrections to the zero-range model's energy coming from the finite range of the two-body interaction and/or an additional three-body interaction. The stationary N-body Schrödinger equation is again given by (70). We consider the case where there are no deeply bound dimers, so that the three-body parameters are real. The zero-range model is approached in the zero-range regime where b is much smaller than all other length scales. In particular, each eigenenergy E of (70) approaches a corresponding eigenenergy E_ZRM of the zero-range model. We are interested in the energy difference δE between the finite-range and zero-range models. We find that in the zero-range regime, δE ≃ δE_2 + δE_3 (80), plus higher-order corrections, where δE_2 is given to leading order by (81) [56], with r_e the effective range of the two-body interaction V_2, while δE_3 is given to leading order by (82)

Footnote 29. For the excited state whose energy is 2qħω above the ground-state energy, the value of C^(m)_{2,1} is multiplied by the binomial factor (q+s choose s) = Γ(q+s+1)/[Γ(s+1) q!] (which is a growing function of q, meaning that the growing delocalization in the trap is overcompensated by the growing penetration under the s²/R² barrier).

Footnote 30. For the homogeneous unpolarized unitary gas of density n, we obtain this result in the non-degenerate limit [X. Leyronas and F. Werner, "Three-body contact for fermions. II. Non-degenerate limit", to be submitted].

Footnote 31. Although the three-body parameters depend on magnetic field, this dependence is smooth if no three-body resonance is crossed, and may be neglected in the vicinity of a given Feshbach resonance.
in terms of the three-body contacts and three-body parameters. If the two-body and three-body interaction potentials are rotationally invariant and V_{2,1} = V_{1,2}, then the six three-body parameters are all equal to a single a_3 and the expression simplifies to (83).

Remarks:
• δE_2 (resp. δE_3) comes from configurations where 2 (resp. 3) particles are close to each other.
• The relations (82,83) are reminiscent of the relation (84) [36], which holds within the zero-range model (footnote 32).
• Typically, a_3 is of order b^{2s}, which gives δE_3 ∼ b^{2s}. The latter scaling was already stated (for lattice models) in [173,174] and was derived at the level of the third virial coefficient in [175]. Since 2s > 1, the three-body correction to the energy is of higher order than the two-body correction (footnotes 33, 34; see also the short numerical illustration after these remarks).
• Let us assume that (83) can be analytically continued to complex a_3, with C_3 still evaluated within the zero-range model. We then recover the expression (42) of the three-body loss rate, simply by substituting Im E = Im δE_3 into (63), and using the fact that Γ = Γ_3 and C_3 ∈ R. Similarly, applying this procedure to (82) yields Γ_3 in agreement with the sum of Eqs. (71,72).
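To illustrate why the b^{2s} correction is subleading compared to the b correction for equal masses, the snippet below compares the two scalings using s ≈ 1.77, the value implied by the exponent 1/(s+1) ≈ 0.36 quoted earlier; the prefactors are arbitrary placeholders and b is expressed in units of the typical length, so the numbers are purely illustrative.

```python
# Illustrative scaling comparison: deltaE_2 ~ b versus deltaE_3 ~ b**(2*s)
s = 1.77  # smallest l = 1 exponent of the equal-mass unitary three-body problem
for b in (1e-1, 1e-2, 1e-3):               # interaction range in units of 1/k_typ
    dE2, dE3 = b, b ** (2 * s)              # leading two- and three-body corrections (arbitrary prefactors)
    print(f"b = {b:.0e}:  dE2 ~ {dE2:.1e}, dE3 ~ {dE3:.1e}, ratio dE3/dE2 ~ {dE3 / dE2:.1e}")
```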
We have ∆_0 = 0, assuming for simplicity that the interaction potentials (V_2, V_{2,1} and V_{1,2}) are finite everywhere (i.e. excluding hard-wall potentials) and do not diverge too quickly at short distances (footnote 35). Hence the energy difference is given by the integral (85). Since the potentials are short-ranged, this integral is dominated by the contributions from three regions, outside of which V becomes negligible:
• two particles of spins ↑↓ are nearby (at distance ≲ b_2) while all other interparticle distances are ≫ b;
• there is a triplet of particles of spins ↑↑↓ which are nearby (their hyperradius is ≲ b) while all interparticle distances other than the ones within that triplet are ≫ b;
• there is a triplet of particles of spins ↑↓↓ which are nearby (their hyperradius is ≲ b) while all interparticle distances other than the ones within that triplet are ≫ b.

Footnote 32. A relation which is more directly analogous to (84) can be formulated in the mass-imbalanced case, see (128).

Footnote 33. In the mass-imbalanced case discussed in App. D, the situation is reversed beyond a critical mass ratio, as was already noted in [175].

Footnote 34. Apart from the leading-order term (81), which is of order b, there are also higher-order contributions to δE_2, which we expect to contain no term of order b^{2s} (since the three-body physics does not enter in δE_2) but only integer powers of b. Accordingly, the ∝ b^{2s} contribution to δE should be entirely given by (82) or (83).
Denoting the contributions of these regions by ∆ 2 , ∆ 2,1 and ∆ 1,2 respectively, we thus have The two-nearby-particle contribution ∆ 2 is given by the r.h.s. of ( 81), as shown in Appendix C, in agreement with [56].It remains to evaluate the contribution ∆ 2,1 coming from three nearby particles of spins ↑↑↓.We introduce a length d 3 satisfying (52,53,69).∆ 2,1 is then given by the contribution to the integral (85) coming from the region of (r 1 , . . ., r N ) such that there is a triplet of particles (i , j , k) of spins (↑, ↑, ↓) of hyperradius R i j k < d 3 while all interparticle distances other than the ones within the triplet (i , j , k) are ≫ d 3 (the result does not depend on the value of d 3 within the range (52,53), as we will see).In the region where these conditions are met for the triplet (1, 2, 3), i.e. when (49,51) hold, we expect a factorization as well as [cf.(15)] with Ψ (0) m (R) := R s−2 φ m (Ω).Furthermore, we can approximate B (1) m by B m in (86), given that we are in the zero-range regime and all distances between the points C, r 4 , . . ., r N are ≫ b.Also using the fact that each triplet gives the same contribution by fermionic antisymmetry, we get where W m,m ′ := and V (R) was defined in (75).To evaluate W m,m ′ , we first use the Schrödinger equation (76) to replace V by ħ 2 m ∆ R .Since we also have and applying the divergence theorem yields We can then use the asymptotic behavior (77) of Ψ m (indeed, for R = d 3 ≫ b, all three interparticle distance are ≫ b except in a small region of hyperangles).This gives W m,m ′ = ħ 2 m 2 s δ m,m ′ a (m) 2,1 .Substituting this into (87) and using (17 The same reasoning gives an analogous expression for ∆ 1,2 , which yields the final expression (82) for
Summary and outlook
We have shown that the three-body contact C_3 is a useful concept for the fermionic N-body problem with resonant interactions, in the standard regime of mass ratio where there is no Efimov effect. Within the zero-range model, the three-body contact controls the number of nearby triplets, the third-order density correlation function at short distances, the tail of the center-of-mass momentum distribution of nearby pairs, and the tail of the two-particle momentum distribution; C_3 also has a simple expression in terms of the N-body wavefunction in the limit where three particles are nearby. Beyond the zero-range model, for a small finite interaction range, we introduced a small three-body parameter a_3, and we showed that the formation rate Γ_3 of deeply bound dimers by three-body recombination equals C_3 Im a_3 times an explicit prefactor; we also showed that the finite-range correction to the energy has a contribution equal to -(ħ/2) C_3 a_3 times the same prefactor. With respect to the relation between Γ_3 and the number of nearby triplets stated in [98], the present work adds a derivation, and an expression of the prefactor in terms of Im a_3.
Furthermore, we considered the general case where there are two different contributions C_{2,1} and C_{1,2} to the three-body contact, corresponding to the spin configurations ↑↑↓ and ↑↓↓ for the associated three-body problem, which can be further broken up into the m-resolved contributions C^(m)_{2,1} and C^(m)_{1,2}. The corresponding loss-rate relations are the ones for finite-range interactions, in the general case where interactions are not invariant under rotation and under exchange between ↑ and ↓. In this case, the three-body parameter also depends on the spin and angular momentum indices. Nevertheless, for a homogeneous unpolarized gas, Γ_3 simply equals C_3 Im ā_3 times an explicit prefactor, with ā_3 the mean three-body parameter. For the unitary gas in the non-degenerate regime, we announced the result of our computation of C_3 (see footnote 21), which would allow one to determine Im ā_3 by measuring Γ_3. Measuring Γ_3 in the low-temperature regime would then allow one to experimentally test the power law C_3 = ζ_3 n^{(2s+5)/3} and determine the associated many-body parameter ζ_3, whose computation is an open theoretical challenge.

For zero center-of-mass momentum, ψ only depends on the relative positions between the three particles, which can be parameterized by the Jacobi coordinates r and ρ. It is convenient to introduce the six-dimensional vector R, whose norm R = ∥R∥ = √(r² + ρ²) is the hyperradius, while its direction R/R can be parameterized by five hyperangles denoted collectively by Ω. The three-body Schrödinger equation then writes (91), with the contact condition (92): there exists a function A such that the wavefunction has the required 1/r singularity when particles 1 and 3 approach each other (the contact condition between particles 2 and 3 then automatically holds by antisymmetry).
At the unitary limit a_2 = ∞, this contact condition is scale invariant. Accordingly, it only acts on the hyperangles; explicitly, it can be expressed as ∂/∂α [sin α ψ]|_{α=0} = 0, where α = arctan(r/ρ). As a result, the unitary three-body problem is separable in hyperspherical coordinates: one can look for eigenstates of the factorized form ψ = R^{-2} F(R) φ(Ω) (where the factor 1/R² is introduced for later convenience), and the three-body problem (91,92) separates into
• a hyperangular problem, defined by (94) together with the contact condition (95) and the antisymmetry constraint;
• a hyperradial problem for F(R), Eq. (93).
Here T_Ω is a differential operator acting on the hyperangles, defined by (96). There is a discrete spectrum of values for s², which are all real and positive in the present case of equal-mass fermions, so that we can take s real and positive; we denote the set of allowed s by {s_ν}, where ν is a discrete index. The corresponding hyperangular eigenfunctions {φ_ν(Ω)} form an orthonormal basis for the hyperangular scalar product, where d⁵Ω denotes the differential solid angle in six-dimensional space, d⁶R = d⁵Ω R⁵ dR; this can be deduced from the self-adjointness of the unitary three-body problem in an isotropic harmonic trap and can also be checked by explicit analytical calculations in the l = 0 subspace [178] (for a mathematical treatment of self-adjointness in free space, see Refs. [179-181]).
The hyperangular problem (94,95) is analytically solvable, with two types of solutions.The first type are common solutions of the unitary and non-interacting hyperangular problems, whose wavefunction φ(Ω) vanishes when two particles approach each other (i.e. for α → 0); the corresponding s take integer values.The second type are the following truly interacting solutions: • For each l , let us denote by {s l ,n ; n ∈ N} the allowed values of s, in increasing order (s l ,0 < s l ,1 < . . .).The index ν can be identified with the set of quantum numbers (l , m,n).All the eigenfunctions φ (l ,m,n) with −l ≤ m ≤ l correspond to the same s l ,n , so that each s l ,n is 2l + 1 times degenerate.For l = 1, the s l ,n are the real positive solutions different from 1 of the transcendental equation ( 4) [the solution s = 1 should be discarded since it leads to an identically vanishing ϕ(α)].For arbitrary l , the s l ,n also solve transcendental equations (see, e.g., [38] and refs.therein); the smallest of all s l ,n is s l =1,n=0 , which is denoted by s for short throughout the article.
The value of the normalization constant N, for l = 1, is quoted in the main text. We computed this value as follows. We have N = 3/J(s). We evaluate the sum over m thanks to an identity valid for any unit vectors û and û'. This yields the expression (101), where α' and ρ̂' are obtained from α and ρ̂ by permutation of particles 1 and 2. The integrand in (101) only depends on α and α'; hence it is a function of α and u. To evaluate the integral we use the formula ∫ d⁵Ω f = ∫_0^{π/2} dα sin²(α) cos²(α) ∫ dr̂ dρ̂ f(α, r̂, ρ̂). Since the integrand is independent of r̂ and of the azimuthal angle of ρ̂ w.r.t. r̂, we can integrate over them, which just gives a factor 4π × 2π. We are left with the integral over α and u. The change of variable u → α' then yields (102), where D is the corresponding integration domain for (α, α'). We evaluate analytically the integrals over α and α' for the first and second terms of (102); for the third term, we evaluate analytically the integral over α', and perform numerically the remaining integration over α.

Footnote 36. More explicitly, the considered wavefunction is an eigenstate of L² with eigenvalue l(l+1)ħ², and of L_z with eigenvalue mħ, where L is the relative angular momentum of the 3 particles, L = -iħ(r × ∇_r + ρ × ∇_ρ). Note that the total angular momentum of the three particles is the sum of their relative and center-of-mass angular momenta.

Footnote 37. Like the standard hyperspherical harmonics, the φ_ν are eigenstates of the Laplacian on the hypersphere, Eq. (94); but they also satisfy the unitary-limit contact condition Eq. (95) (together with fermionic antisymmetry), which leads to non-integer eigenvalues s².
Appendix B. Homogeneous gas
Let us show that for the homogeneous gas at equilibrium, C (m) 2,1 is independent of m.We will use the following lemma: Let Ô be an operator acting on functions of Ω, such that with θ the Heaviside function.This follows from Eq. ( 15) and the fact that the φ m (Ω) are orthonormal.
Injecting the ansatz (106) into the N -body Schrödinger equation ( 70) yields, in the region R 13 , with up to corrections that are negligible in the zero-range regime, as was checked in [56].We omitted the dependence of E on (c; r 2 , r 4 , . . ., r N ) to alleviate notations.We neglected all interaction potentials other than V 2 (r ), because we are far outside their ranges: In the considered region R 13 , all interparticle distances other than r ≡ r 13 , and hence also all triplet hyperradii, are ≫ b.
For the same reason, we will replace V by V 2 in (105).
For the zero-range model, we also expect a factorization of the many-body wavefunction in R_13, Eq. (108), where χ^(0)_E(r) is the s-wave two-body scattering state at energy E for the zero-range model, i.e. the solution of the corresponding free Schrödinger equation with the contact condition χ^(0)_E(r) = 1/r - 1/a_2 + O(r) for r → 0. Here we normalized χ^(0)_E in such a way that the function A in (108) is the same as in (14). The solution is (110), where f^(0)_k = -(1/a_2 + ik)^{-1} is the scattering amplitude of the zero-range model, and k := √(mE)/ħ, with the determination k = i√(-mE)/ħ if E < 0. We note that for r → 0, from the Taylor expansion of (110), we get a subleading singular contribution to the asymptotic expansion of ψ^(0) given by -k²rA/2, in agreement with Eqs. (135,136) of [56]. From this we can infer that k is typically ≲ k_typ.
Scattering theory gives the large-distance behavior of the finite-range scattering state χ E : where f k is the s-wave scattering amplitude associated to the interaction potential V 2 (r ).
Here we normalized χ E in such a way that χ E (r ) ∼ 1/r for E → 0 and r → ∞, in agreement with the assumption that the same function A appears in (108) and (106).
Next, in (105), we can thus replace V by V 2 and substitute (108,106), which yields where It remains to evaluate W .Using (107,109), we rewrite it as This gives, by the divergence theorem, W = 4πħ 2 m W with W := χ (0) we can directly substitute (110,111) and their derivatives, which yields Since kb 2 ≲ k typ b 2 ≪ 1, we can use the low-energy expansion of the scattering amplitude: e /2 + . . . in the limit kb 2 → 0, where r e is by definition the effective range.This gives W ≃ 2πE r e .Substituting this into (112), we conclude that ∆ 2 is indeed given by the r.h.s. of (81).
Appendix D. Mass-imbalanced case
In this Appendix we consider the case where the ↑ and ↓ fermions have different masses, m_↑ > m_↓ for definiteness. Experimentally, this is realized in a mixture of two fermionic species, such as 40K-6Li [182], 161Dy-40K [183], or 53Cr-6Li [184]. In Section D.1, we extend the relations obtained in the main text to the mass-imbalanced case. In Section D.2 we show that, when the mass ratio exceeds a critical value, a conceptual simplification takes place: a generalized zero-range model can be introduced, within which relations involving the three-body parameters can be formulated and derived more directly. The results of this Appendix are valid if m_↑/m_↓ is smaller than the threshold where the five-body Efimov effect appears [103], which implies that the four-body Efimov effect [102] and the three-body Efimov effect [176] do not take place either; moreover, m_↑/m_↓ should not be too close to the four-body and five-body Efimov thresholds, as discussed in Section D.3. Obviously, in the N-body Schrödinger equation, the mass m is replaced by the mass m_i of particle i (m_i = m_↑ or m_↓ depending on the spin of particle i), both for the zero-range model and for the finite-range model. It will prove convenient to introduce the angles ϕ and φ related to the mass ratio. The definitions of the center-of-mass and Jacobi coordinates should be generalized to the unequal-mass case; in particular, the center-of-mass of particles 1 and 3, which appears in the two-body contact condition (14), becomes the mass-weighted point (m_↑ r_1 + m_↓ r_3)/(m_↑ + m_↓). The continuity equation and the probability current are still given by (55,56), provided the definitions are adapted to the unequal masses. While in the mass-balanced case there was a single exponent s associated to the unitary three-body problem, in the mass-imbalanced case we have two exponents, s and s̃, associated respectively to the ↑↑↓ and ↑↓↓ unitary three-body problems [97,176]. As a function of m_↑/m_↓, s is continuously decreasing and vanishes at the Efimov-effect threshold m_↑/m_↓ = 13.6069…, while s̃ is continuously increasing and tends to 2 for m_↑/m_↓ → ∞.
D.1. Extension of the relations from the mass-balanced case
The number of nearby ↑↑↓ triplets is still given by the same expression as in the mass-balanced case, whereas the number of nearby ↑↓↓ triplets is now governed by the exponent s̃. For our convention m_↑ > m_↓, we have s < s̃. Hence the total number of nearby triplets N_3(ϵ) is dominated by the ↑↑↓ contribution. The remarks at the end of Sec. 2.1 remain valid. In particular, the bunching effect due to the zero-range interactions still overcompensates the antibunching effect due to Pauli exclusion, both for N_{2,1}(ϵ) and N_{1,2}(ϵ), because s < s̃ < 2.
When three particles of spins ↑, ↑, ↓ approach each other, the asymptotic behavior of the manybody wavefunction is still given by (15).This yields [using the Jacobian |∂(r 1 , r 2 , r 3 )/∂(C, R)| = cos 3 When three particles of spins ↑, ↓, ↓ approach each other, the asymptotic behavior of the manybody wavefunction is given by ( 18) with s replaced by s, i.e.
This yields the analogous short-distance result for the ↑↓↓ configuration. The triplet correlation functions satisfy corresponding relations. The leading large-momentum tail of N_P(K) is again set by the ↑↑↓ contact, where M is still given by the expression (24) in terms of N, and N now stands for the normalization constant of the unitary hyperangular wavefunction of the ↑↑↓ problem (footnote 38). The two-particle momentum distribution has an analogous large-momentum tail. The expression (71) for the ↑↑↓ three-body loss rate remains valid. In the expression (72) for the ↑↓↓ three-body loss rate, s is replaced by s̃. Since the ↑↓↓ three-body parameters a^(m)_{1,2} are now of order b^{2s̃} whereas the ↑↑↓ three-body parameters a^(m)_{2,1} are of order b^{2s}, the ↑↓↓ three-body loss rate Γ_{1,2} ∝ b^{2s̃} is negligible (in the zero-range regime) compared to the ↑↑↓ three-body loss rate Γ_{2,1} ∝ b^{2s}. Hence Γ_3 ≃ Γ_{2,1}. The expression (81) for the two-body contribution to the energy correction remains valid. For the three-body contribution to the energy correction, the leading-order contribution again comes from ↑↑↓ triplets (footnote 39). A peculiar situation takes place for s < 1/2 (i.e. for m_↑/m_↓ > 12.313…): δE_3 ∝ b^{2s} dominates over δE_2 ∝ b; i.e., the finite-range correction mainly comes from configurations with three nearby particles, rather than two nearby particles. This was already pointed out in [175] at the level of the third virial coefficient. Few-body and many-body numerical computations are often performed with finite-range interactions and extrapolated to the zero-range limit, see e.g. [146,185] and [28,62,65,67,111,112,173,174,186-192] respectively; to accurately perform such b → 0 extrapolations in the regime s < 1/2, it may be important to include the b^{2s} scaling (footnotes 40, 41).

Footnote 38. The contribution from the ↑↓↓ three-body problem gives rise to a higher-order subleading tail of N_P(K), given by C_{1,2} K^{-2s̃-4} × 32 π³ 4^{s̃} 3^{-s̃-1/2} (s̃+1) Γ(s̃+2)² sin²(s̃π) Ñ², with Ñ the normalization constant of the unitary hyperangular wavefunction of the ↑↓↓ problem.

Footnote 39. Indeed, the contribution to δE_3 from ↑↑↓ triplets of nearby particles is ∝ b^{2s}, whereas the contribution from ↑↓↓ triplets is ∝ b^{2s̃}, i.e. of higher order.

Footnote 40. Naturally, the ∝ b^{2s} scaling may be hard to distinguish from the regular ∝ b scaling if s is close to 1/2 (see e.g. Fig. 9(b) of [146]).

Footnote 41. We take this opportunity to recall two other subtleties relevant to zero-range extrapolations (i.e. to continuum extrapolations in the case of lattice models, where the interaction range is set by the lattice spacing). Being due to breaking of Galilean invariance, they arise for any mass ratio. The first subtlety, reported in [38,56,187], is that for lattice models (where the interaction range b is set by the lattice spacing), the ∝ r_e term given by the r.h.s. of (81) is not the only ∝ b
D.2. Relations within the generalized zero-range model
In this Section, we assume that m ↑ /m ↓ is larger than 8.619 . . ., so that s < 1.This ensures that a three-body wavefunction diverging as R −s−2 at small R remains square integrable.This allows one to define a generalized zero-range model (GZRM) by supplementing the two-body contact condition ( 14) (involving the two-body scattering length) by the following three-body contact condition (involving the three-body parameters): There exists B m such that in the limit R → 0 where particles 1,2,3 approach each other, while keeping fixed their hyperangles Ω, their center-of-mass C, and the positions of the other particles r 4 , . . ., r N .By antisymmetry, Eq. ( 127) imposes a similar condition when any triplet of particles with spins ↑↑↓ approach each other.Apart from the two-body and three-body contact conditions (14,127), the GZRM is defined (like the standard zero-range model) by the Schrödinger equation without any interaction potential (113).Note that for vanishing three-body parameters, the GZRM reduces to the standard zero-range model (ZRM).
Remarks:
• If the underlying microscopic interactions are rotationally-invariant, then a (m) 3 does not depend on m.
• In the present GZRM, in the limit where three particles of spins ↑↓↓ approach each other, the asymptotic behavior of the wavefunction has the same form (120) as for the ZRM. In other words, the behavior ψ ∝ R^{-2-s̃} is forbidden. Such a wavefunction would not be normalizable at small R, since s̃ is always larger than one (footnote 42). This fact does not prevent one from rederiving the relations (125,126), which come from configurations with nearby ↑↑↓ triplets.
• Extensions of the GZRM to the regime s ≥ 1, which were introduced recently [142], are beyond the scope of this work.
• For mathematically rigorous studies of the GZRM in the three-particle case, see [201,202].

D.2.1. Derivative of the energy with respect to the three-body parameters

Within the GZRM, the derivatives of the energy w.r.t. the three-body parameters are given by the three-body contacts, Eq. (128). Here the derivative w.r.t. a^(m)_3 is taken at fixed value of a_2 and of the other a^(m')_3 with m' ≠ m.

Footnote 41 (continued). ...contribution to the two-body finite-range energy correction δE_2: There is a second contribution to δE_2, proportional to a parameter R_e (that parameter R_e quantifies the dependence of the two-body vacuum T-matrix on the center-of-mass momentum, which arises from lattice-induced breaking of Galilean invariance). Therefore, if one wishes to cancel the ∝ b term in the b → 0 continuum extrapolation, one needs to tune to zero not only r_e (as done in [173,174,193] and for one of the dispersion relations considered in [112]) but also R_e (and if the dispersion relation has a cusp at the edge of the Brillouin zone, there is a third ∝ b contribution to δE_2, in a finite box with periodic boundary conditions [56]). The second subtlety, reported in [56] and further evidenced in [191], arises if one restricts single-particle momenta to a ball of radius Λ (with Λ = π/b for lattice models, with b the lattice spacing), i.e. if one takes a dispersion relation equal to +∞ for momenta larger than Λ, as was done in [188,194-200]: The universal zero-range model is not obtained in the limit Λ → ∞, because the hard cutoff induces a dependence of the two-body T-matrix on the center-of-mass momentum, and this dependence surprisingly survives for Λ → ∞. We expect the same problem in [173,174], where such a spherical cutoff was also used.

Footnote 42. Hence there are no ↑↓↓ three-body parameters within the GZRM, which is why we denoted the ↑↑↓ three-body parameters simply by a^(m)_3.
The three-body contacts C (m) 2,1 are still defined by (119).Remark: Within the ZRM, the derivative of the energy w.r.t. a 2 is given by C 2 , see (84).Within the GZRM, this relation also holds, with d E /d (−1/a 2 ) replaced by the partial derivative ∂E /∂(−1/a 2 ) taken at fixed a (m) 3 .This can be justified by using the derivation presented in Sec.IV.C of [56] (at fixed three-body parameters, there is no additional contribution to the energy variation coming from nearby triplets of particles, as we will see below).
On the other hand, from the Schrödinger equations for ψ and ψ̃: As we will see below, there is a contribution δ_0 to δ coming from the configurations where particles 1, 2 and 3 are close to each other. By symmetry, there is an identical contribution from the configurations where any set of three particles with spins ↑↑↓ are close to each other. Hence δ equals δ_0 times the number of such three-particle sets, N_↑(N_↑ - 1)N_↓/2. To evaluate δ_0, we only need to keep the terms i = 1, 2, 3 in Eq. (129), which gives, after the change of coordinates (r_1, r_2, r_3) → (C, R, Ω), an expression in terms of F := R²ψ and F̃ := R²ψ̃, where T_Ω is defined by (96). Since the two-body scattering length is the same for ψ and ψ̃, we only need to keep the terms involving derivatives with respect to R in Eq. (130) (footnote 44). Transforming the integral over R into a boundary term at R → 0, the result (128) follows by using the three-body contact conditions [given by Eq. (127) for ψ, and the same condition with ã^(m)_3 replacing a^(m)_3 for ψ̃].
To derive this relation, we proceed very similarly to the bosonic case of [72].We consider a stationary state of the GZRM, i.e. a solution ψ(r 1 , . . ., r N ) of the stationary Schrödinger equation (113) together with the two-body and three-body contact conditions (14,127).The corresponding solution of the time-dependent Schrödinger equation is Ψ(r 1 , . . ., r N ; t ) = ψ(r 1 , . . ., r N )e −i E t /ħ , and d 3 r 1 . . .d 3 r N |Ψ(r 1 , . . ., r N ; t )| 2 , with ψ normalized to unity.Excluding from the integration domain the regions where R i j k < ϵ and taking the limit ϵ → 0, the continuity equation and the three-body contact conditions leads to the final expression, where all extra mass-ratio dependent factors, arising e.g. from (115), divide out.
D.3. Validity conditions
Let us denote by s_{j↑,j↓} the scaling exponent of the unitary (j↑ + j↓)-body problem with j↑ particles of spin ↑ and j↓ particles of spin ↓ (so that s_{2,1} ≡ s and s_{1,2} ≡ s̃). We expect the relations (116,118,123,124) to be valid under the condition Min{ s_{j↑,j↓} ; j↑ ≥ 2, j↓ ≥ 1, j↑ + j↓ > 3 } > s. Indeed, we expect a contribution from the (j↑ + j↓)-body problem to N_{2,1}(ϵ) at small ϵ [resp. to N_P(K) at large K] scaling as ϵ^{2 s_{j↑,j↓} + 2} [resp. 1/K^{2 s_{j↑,j↓} + 4}], which dominates over the contribution from the three-body problem when s_{j↑,j↓} < s. Similarly, we expect relation (117) to be valid under the condition Min{ s_{j↑,j↓} ; j↑ ≥ 1, j↓ ≥ 2, j↑ + j↓ > 3 } > s̃. The former condition breaks down when m_↑/m_↓ is near the thresholds where the four-body [102] and five-body [103] Efimov effects appear, where s_{3,1} and s_{4,1} become small. Based on existing data, we expect that both of the above validity conditions are satisfied at least in the range m_↑/m_↓ ≤ 10. Indeed, in this range, we have s_{3,1} > s and s_{4,1} > s [103], s_{5,1} > s [104], s_{2,2} > s [146], s_{1,3} > s [103], and the trends of available data suggest that the conditions will also hold for larger values of j↑ or j↓. We conservatively restricted to m_↑/m_↓ ≤ 10 because s_{2,2} was not computed beyond this range, but the conditions s_{3,1} > s and s_{4,1} > s actually hold up to at least m_↑/m_↓ = 13.2 [103].
Appendix E. Generalization to statistical mixtures and non-stationary states
Many of the relations derived for stationary states in the main text and in Appendix D are directly generalizable to non-stationary states and statistical mixtures, similarly to the relations involving the two-body contact [35,36,56].Indeed, Eqs. (15,17,18,19) remain valid for any non-pathological linear combinations of stationary states (including solutions of time-dependent problems, where a 2 and the trapping potential U can depend on time); and relations (3,5,6,11,12,23) remain true for arbitrary non-pathological statistical mixtures of such pure states (including the case of thermal equilibrium).Here, non-pathological means that the occupation probabilities of stationary states should decay sufficiently quickly (which includes the simple case where only a finite number of states are populated); more specifically, for a pure state, it is necessary that Eqs.(14,48) still hold, while for a statistical mixture ρ = i c i |ψ i 〉〈ψ i |, it is necessary that the three-body contact of the mixture [defined by Eq. (3)] equals i c i C 3,i with C 3,i the three-body contact of state ψ i .Moreover, at thermal equilibrium, the thermally averaged loss rates are given by (42,43,44) for simple interactions; for more general interactions they are given by (71,72), and by (79) for a homogeneous unpolarized gas.Furthermore, (82,83) remain valid in the canonical ensemble, with δE 3 the energy difference (between finite-range and zero-range models) taken at fixed entropy (which equals the free-energy difference at fixed temperature).
As for the wavevector k typ that appears in the validity conditions, it should be defined by the same procedure as before and then taking the maximum over all significantly populated eigenstates; for example for the balanced unitary gas at thermal equilibrium, k typ = max(k F , k T ) with k T the thermal wavevector, defined by ħ 2 k 2 T /(2m) = k B T .
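As a quick numerical illustration of the thermal wavevector defined by ħ²k_T²/(2m) = k_B T, the snippet below evaluates k_T for an assumed example (a 6Li atom at 1 µK); these specific numbers are our illustrative choices, not values taken from the text.

```python
import math

hbar = 1.054571817e-34          # J s
kB = 1.380649e-23               # J / K
m = 6.015 * 1.66053907e-27      # kg, mass of a 6Li atom (illustrative choice)
T = 1e-6                        # K, illustrative temperature

kT = math.sqrt(2 * m * kB * T) / hbar   # from hbar^2 kT^2 / (2 m) = kB T
print(f"k_T ≈ {kT:.2e} m^-1, i.e. 1/k_T ≈ {1e6 / kT:.2f} µm")
```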
Typically, b is set by the true range b 0 of the potential (i.e. the length such that V 2 (r ) decays quickly for r ≫ b 0 ).More generally, b = Max(b 0 , |r e |) where r e is the effective range.
, D would govern the asymptotic behavior of the three-body zero-energy wavefunction at distances ≫ |a 2 | (while a 3 governs the regime of distances ≪ |a 2 | and ≫ b) and Γ 3 would have a simple expression in terms of Im D in the weakly interacting regime k typ |a 2 | ≪ 1 (whereas (42) remains valid in the strongly correlated regime k typ |a 2 | ≳ 1).On the other hand, a 3 is only defined for resonant interactions (|a 2 | ≫ b) while D also exists for non-resonant interactions.
By the divergence theorem, the integral over C of the ∇ ∇ ∇ C term vanishes, while the integral over R of the ∇ ∇ ∇ R term yields the result Q = − 3 3 4 d 3 5
|a_2| ≫ b_2 (with a_2 and b_2 the scattering length and the range of V_2). The three-body interaction potential is assumed to have a finite range b_3, in the sense that it decays quickly at hyperradii larger than b_3. We consider the zero-range regime where b := Max(b_2, b_3) is much smaller than |a_2| and 1/k_typ, where 1/k_typ is defined as the smallest scale of variation of the wavefunction ψ in the region where all interparticle distances are ≫ b. For alkali atoms near an open-channel dominated Feshbach resonance, b_2 is set by the van der Waals length [159], which is ≪ 1/k_typ in typical cold-atom experiments. Instead of a single three-body parameter a_3, there are in general six three-body parameters a^(m)_{2,1} and a^(m)_{1,2}, with m ∈ {-1, 0, 1} the angular momentum quantum number.
| 19,654.6 | 2022-11-17 | ["Physics"] |
Further Investigation of Intensity Noise Reduction on an Incoherent Light Source using a Gain-Saturated Semiconductor Optical Amplifier in a Spectrum-Sliced Channel at 2.5 Gb/s
The “spectrum-slicing” technique, employing incoherent light, has been shown to be a highly practical, cheap and hence very attractive proposal for future all-optical networks. In this study, the use of semiconductor optical amplifier gain saturation for intensity noise reduction on incoherent light is further studied in terms of the Relative-Intensity-Noise (RIN), with a view to obtaining the optimum SOA injection current and input power conditions to achieve the best possible intensity noise reduction, in conjunction with OSNR, BER and Q-factor results. The results reported herein give designers knowledge of the best SOA operating conditions to enhance overall system performance, whilst still obtaining signal gain from the SOA.
INTRODUCTION
In present times, expanding the capacity of optical fiber networks has become imperative to meet the rapid recent growth in network capacity requirements. Industry's hunger for network capacity grows daily, fueled by companies such as Facebook and Verizon, which create demands on network capacity that far exceed today's availability (Zabinski et al., 2013). Additional schemes are therefore needed in today's bandwidth-hungry world. One such scheme is "Spectrum-Slicing" (SS). This has been proven to be a promising method for generating wavelength channels as optical carriers, in which an incoherent light source, such as the Amplified Spontaneous Emission (ASE) from an Erbium-Doped Fiber Amplifier (EDFA) or other broadband source, is spectrally sliced using a bandpass filter (Connelly et al., 2005). Spectrum-slicing therefore offers a practical and highly cost-effective solution, utilizing one common light source as opposed to expensive, multiple transmitter lasers operating at different wavelengths, and it can be exploited in Wavelength Division Multiplexing (WDM) systems (Lee et al., 1993).
The technique, however, has one key drawback: the inherently high excess intensity noise of the incoherent light source, which degrades system performance and arises from the square-law characteristics of the photodetection process. The signal quality of systems employing spectrum-sliced sources has been enhanced by running at low data rates, or alternatively by widening the filter channel bandwidth (Yamatoya et al., 2000), consequently sacrificing system capacity utilization and increasing dispersion arising from the wide spectrum-slicing filter. However, performance may also be improved by exploiting the nonlinear gain compression effect of a saturated Semiconductor Optical Amplifier (SOA) included in the set-up to reduce the excess intensity noise, sometimes by up to 10 dB at 600 Mb/s in Kim et al. (1999), by 16 dB in Tariq and Forsyth (2014), and by up to 25 dB in McCoy et al. (2005). A practical example of the use of the technique has recently been shown in the medical application area of Optical Coherence Tomography (OCT), to improve the resolution (Lee et al., 2011) and reduce the intensity noise (Shin et al., 2010). In the latter work, the ASE noise from a Super Luminescent Diode (SLD) source was reduced by 9 dB in the measured Relative Intensity Noise (RIN) through amplification with a gain-saturated SOA. They achieved this lower noise for broadband OCT light sources by using a very basic and simple design. This result has contributed to making practical, economical and low-noise SLD-based sources for applications in OCT today.
Research into SS techniques and into reducing the associated intensity noise is today still far from declining. In Zaineb et al. (2012), it was shown that an ultra-narrow (≈0.01 nm) spectrum-sliced incoherent light source permitted the transmission of 10-Gb/s NRZ signals over 20 km of SSMF and a 0.2-nm-bandwidth optical filter without any dispersion compensation. In Qikai and Hoon (2014), a novel offset optical filtering technique was shown to reduce the excess intensity noise of an ultra-narrow spectrum-sliced incoherent light source using a gain-saturated SOA running at 12 Gb/s. The offset optical filtering converted the chirp into amplitude variations to produce destructive interference within the inherent excess intensity noise and therefore improved signal quality. In Kharraz et al. (2013), the use of SOA gain compression was reported for achieving intensity noise reduction in light from an incoherent broadband source, running at a high data rate of 10 Gb/s in a narrow spectrum-sliced high-intensity channel of 20 GHz (about 0.16 nm) bandwidth. Data were collected on the performance of the single SOA as noise reducer at various input powers and biases. Improvements of around 20 dB in the RIN were seen, together with commensurate improvements in both the Optical Signal-to-Noise Ratio (OSNR) and quality factor (Q).

Fig. 1: SOA gain characteristics.
In this study, we improve on our previously published results (Tariq and Forsyth, 2014) by presenting a further optimization scheme for a saturated SOA-based intensity noise reducer which shows the corresponding enhancements in the RIN as well as the OSNR, Bit-Error-Rate (BER) and Q factor.
SOA Characterization:
Figure 1 shows the SOA gain response to increasing input power, plotted using the ASE output from a 0.48 nm spectrum-slicing band-pass filter (Fig. 2). The system uses no additional EDFA, and the SOA saturation input power is reached by increasing the amplitude of the D.C. bias generator in the ASE source. The unsaturated gain observed is around 26 dB up to an input power of about -20 dBm, beyond which the gain starts to saturate quite sharply, decreasing almost linearly with increasing input power (Johni et al., 2014).
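The qualitative shape of such a gain curve can be reproduced with a generic homogeneously broadened amplifier model, the implicit relation G = G0·exp[-(G-1)·P_in/P_sat]. The sketch below solves that equation numerically; the small-signal gain and saturation power are illustrative values chosen only to resemble the curve described above, not parameters of the paper's SOA model.

```python
import math

def saturated_gain(p_in_dbm, g0_db=26.0, p_sat_dbm=0.0):
    """Solve the implicit saturation equation G = G0*exp(-(G-1)*P_in/P_sat) by bisection."""
    g0 = 10 ** (g0_db / 10)
    p_in = 10 ** (p_in_dbm / 10)     # mW
    p_sat = 10 ** (p_sat_dbm / 10)   # mW, assumed internal saturation power (illustrative)
    f = lambda g: g - g0 * math.exp(-(g - 1) * p_in / p_sat)  # monotonic in g, unique root
    lo, hi = 1.0, g0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if f(mid) > 0 else (mid, hi)
    return 10 * math.log10(0.5 * (lo + hi))

for p_dbm in (-40, -30, -20, -10, 0):
    print(f"P_in = {p_dbm:4d} dBm  ->  gain ≈ {saturated_gain(p_dbm):5.1f} dB")
```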
SYSTEM SET UP AND RESULTS
A block diagram of the set-up we used is shown in Fig. 2. An ASE broadband source is spectrally filtered using an 80 GHz (~0.48 nm) band-pass Bessel filter, centered at 193.1 THz (1552.5 nm). The SOA input power is controlled by an optical attenuator. The selected channel is modulated using a Mach-Zehnder modulator with NRZ data at 2.5 Gb/s. The modulated signal is then filtered with a 100 GHz (~0.8 nm) band-pass Bessel filter. Finally, a PIN photodetector is inserted, followed by a 7.5 GHz low-pass Bessel filter. WDM, RF spectrum and eye diagram analyzers are all attached to characterize the noise reduction performance in terms of the OSNR, RIN and Q factor, respectively. Figure 3 shows the Q factor response to increasing input power for 4 different SOA bias currents. At all biases, the Q factor is seen to increase continuously with input power until around 0 dBm input, where it reduces slightly. From Fig. 1 it was seen that at 0 dBm input power the SOA is operating in the highly saturated regime. It is interesting to note that increasing the SOA injection current beyond 0.15 A (near the typical SOA model operating point) does not offer any further benefit. Figure 4 shows the Q-factor values obtained when the SOA was removed, i.e., without the noise-reducing SOA in the system. The maximum value obtained is clearly unacceptable, around 3.7 dB at 0 dBm input power, and it does not improve beyond this. The Q factor was also at a maximum at 0 dBm SOA input power in Fig. 3. From Fig. 5, when the Q factor is measured with the SOA in the system, it reaches a value of around 11.5 dB. We can therefore clearly see the effect of the inclusion of the SOA on system performance. Around 0.15 A is again the optimum bias here at 0 dBm total input power to the SOA, in full agreement with Fig. 3. In Fig. 6, the OSNR is plotted against SOA input power at various biases. The OSNR at all biases up to 0.25 A is shown to continuously increase with increasing SOA input power. An OSNR enhancement of around 20 dB is estimated at about 0.15 A bias when the input power is increased from -15 dBm to -3 dBm.
Figure 7 plots the BER as a function of input power and SOA bias current. It shows that the best BER occurs around -5 dBm input power to the SOA at all three biases used in the figure. The best BER occurs at 0.15 A bias. This bias is in full agreement with Figs. 3a, 5 and 6.
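For context, the Q factor and BER of an NRZ signal with Gaussian noise are linked by the standard relation BER = ½·erfc(Q/√2). The short sketch below applies it to the Q values quoted above, assuming, as is common for eye-diagram analyzers, that the quoted dB values correspond to 20·log10(Q); that interpretation is our assumption, not stated in the paper.

```python
import math

def ber_from_q_db(q_db):
    q_lin = 10 ** (q_db / 20)                      # assume Q[dB] = 20*log10(Q)
    return 0.5 * math.erfc(q_lin / math.sqrt(2))   # Gaussian-noise NRZ approximation

for q_db in (3.7, 11.5):   # without and with the noise-reducing SOA (values from the text)
    print(f"Q = {q_db:4.1f} dB  ->  BER ≈ {ber_from_q_db(q_db):.2e}")
```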
In Fig. 8, the Relative Intensity Noise (RIN) is plotted as a function of SOA input power (dBm) at 0.15 A drive current. We see that as the input power into the SOA increases, the noise suppression continuously increases, almost linearly. The lowest RIN is measured to be around -103.4 dB/Hz. We can also observe that increasing the SOA bias current enhances the noise reduction at all input powers in the saturation regime. To quantify the intensity noise reduction, it can easily be seen from Fig. 8 that around 25 dB of RIN is suppressed over the SOA input power range from ~-20 dBm to ~+10 dBm at 0.15 A drive current, with only a slight further improvement inferred when greater input powers are inserted. Figure 9 plots the RIN as a function of SOA bias current at the nominal input power of 0 dBm. It can be seen that as the SOA drive current increases from 0 to 0.05 A, the RIN improves dramatically from around -116 to -131 dB. Here, an improvement of around 15 dB in this measurement parameter is observed, which still holds at around 0.15 A bias. Hence, in agreement with the previously measured Q, OSNR and BER, we infer that around 0.15 A SOA bias current would achieve the optimum intensity noise reduction. However, as seen from the previous figures, when the SOA enters the very deep saturation region, all metrics become stable and even occasionally worsen marginally at very high SOA biases and input powers. We can also infer from Fig. 9 that increasing the bias current to more than 0.25 A may not produce any further change in the RIN.
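As a reminder of how such a figure is obtained, RIN is the power spectral density of the relative power fluctuations, RIN(f) = S_δP(f)/⟨P⟩², usually quoted in dB/Hz. The sketch below estimates it from a detected-power trace; the trace here is synthetic white noise used purely for illustration, not data from the experiment, and the sample rate and power level are arbitrary assumptions.

```python
import numpy as np

fs = 20e9                       # sample rate of the detected-power trace, Hz (illustrative)
n = 1 << 16
p_mean = 1e-3                   # average detected power, W (illustrative)
rng = np.random.default_rng(0)
p = p_mean * (1 + 0.01 * rng.standard_normal(n))    # synthetic trace with 1% fluctuations

dp = p - p.mean()
psd = np.abs(np.fft.rfft(dp)) ** 2 / (fs * n)       # two-sided PSD estimate of delta-P, W^2/Hz
rin_db_hz = 10 * np.log10(2 * psd[1:] / p.mean() ** 2)   # one-sided RIN(f) in dB/Hz (DC bin skipped)
print(f"mean RIN ≈ {rin_db_hz.mean():.1f} dB/Hz")
```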
CONCLUSION
In this study, using the RIN as an additional metric, we have further investigated the use of SOA gain compression to assess intensity noise reduction of light from an incoherent broadband source for future spectrum-sliced systems, with the aim of improving signal quality and overall system performance. Based on our new RIN measurements combined with the previous OSNR, BER and Q-factor results, we conclude that an input power of around 0 to -5 dBm and a bias current of around 0.15 A are still most probably the optimum operating conditions for the SOA. The results reported herein give designers knowledge of the best SOA operating conditions to enhance overall system performance, whilst still obtaining some signal gain from the SOA.
Fig. 4: (a) Q factor as a function of input power before the SOA; (b) Q factor = 3.7 dB at 0 dBm input power.

Fig. 5: Q factor at 0 dBm input power with varying SOA injection current.

| 2,457 | 2016-04-05 | ["Physics", "Engineering"] |
The Gestation Delay: A Factor Causing Complex Dynamics in Gause-Type Competition Models
In this paper, we consider a Gause-type model system consisting of two prey and one predator. The gestation period is considered as the time delay for the conversion of prey biomass into predator biomass. Bobcats and their primary prey, rabbits and squirrels, found in North America and southern Canada, are taken as an example of such an ecological system. It has been observed that there are stability switches and that the system becomes unstable due to the effect of the time delay. Positive invariance, boundedness, and local stability analysis are studied for the model system. Conditions under which both the delayed and non-delayed model systems remain globally stable are found. Criteria which guarantee the persistence of the delayed model system are derived. Conditions for the existence of Hopf bifurcation at the nonzero equilibrium point of the delayed model system are also obtained. Formulae for the direction, stability, and period of the bifurcating solution are derived using normal form theory and the center manifold theorem. Numerical simulations are presented to analyze the effect of each of the parameters of the model system on its dynamic behavior. The findings are interesting from the application point of view.
Introduction
Persistence, structure, and dynamics of biological species have received great attention from ecologists as well as mathematical biologists. Interspecific interference such as competition and predation dominates the dynamics of such systems. Gause's competitive exclusion principle states that when two species compete for the same resources within an environment, one of them will eventually outcompete and displace the other. The displaced population may become locally extinct through migration or death, or it may adapt to a sufficiently distinct niche within the environment [1]. Upadhyay and Iyengar [2] studied the extinction and coexistence of two competing prey species in a Gause-type ecological model in which the predator species is influenced by the damage effect caused by crowding from members of its own population.
In ecosystem modeling, the type of predation is very important, and consequently, the dynamics become richer with more realistic assumptions [3]. The functional response (rate of prey consumption by an average predator) is one of the important features that govern the dynamics of the prey-predator system. The Holling type II saturated functional response has been widely used to investigate the behavior of prey-predator systems. In real situations, conversion of prey biomass into predator biomass does not take place instantaneously; there is always a time lag due to gestation. To make the system considered in [2] more realistic, we introduce the gestation period as a time delay and study the effect of the delay on the dynamics of the model system. Kar and Batabyal [4] studied a two-prey one-predator system in the presence of time delay and derived conditions for persistence and global stability of the model system. Recently, Deka et al. [5] considered a general Gause-type two-prey one-predator model system and studied the effect of predation on two competing prey species. The authors observed Hopf bifurcation by taking the death rate of the predator population as a bifurcation parameter. Jana et al. [6] also studied a two-prey one-predator model system in which the prey species is divided into weaker and stronger classes according to the predator's catching efficiency. The evolutionary effects of selective disturbances on an evolving trait (e.g., body size and maturation age) of the predator individuals in a one-predator two-prey community were investigated by Meng et al. [7]. Using methods of adaptive dynamics, they constructed an invasion fitness function and obtained the conditions for evolutionary branching and stability under selective disturbances in both monomorphic and dimorphic populations. Lu et al. [8] developed a rigorous analysis to investigate how the dual phenomena of prey infection and hunting strategies of the predators influence the disease dynamics in a predator-prey model with infected prey and a stage-structured predator. A three-species model system was also studied by Gakkhar and Gupta [9], in which two competing prey species grow logistically while the third species acts as a predator as well as a host; they observed that the possibility of survival of all three species is due to commensalism between a prey and the predator population. The dynamic behavior of a two-prey one-predator system with impulsive effects concerning periodic biological and chemical control was investigated by Pei et al. [10], and it was observed that there exist asymptotically stable pest-eradication periodic solutions if the impulsive period is less than some threshold. Wang et al. [11] studied a two-prey one-predator system with a Watt-type functional response and an impulsive control strategy and established conditions for the extinction of the prey and predator populations. The problem of adaptive control and chaotic behavior of a continuous-time two-prey one-predator system was studied by El-Gohary and Al-Ruzaiza [12] with nonlinear feedback.
Many different food chain models including three species have been constructed. Das et al. [13] studied the chaotic dynamics of a three-species competition model with noise. Dubey and Upadhyay [14] studied the interaction of two predators competing for a single prey species with a ratio-dependent functional response and established conditions for persistence and extinction of the species. Song et al. [15] explored the combined influence of multiple delays under two different scenarios, i.e., the intrinsic growth rate of the resource being less/more than those of the consumer and predator, in a three-species model with consumer-eat-resource and predator-eat-consumer interactions. The significance of top-predator interference and gestation delay was studied by Jana et al. [16] in a three-species food chain model involving intermediate and top predator populations, and they observed subcritical Hopf bifurcation phenomena. Wang and his group explored the dynamics of different stochastic epidemic models, such as SIRS and FIV, with different infectious forces under intervention strategies and environmental variability [17-20].
Motivated by the above works, we describe the dynamical behavior of the delayed model system under consideration and study the effect of the control parameters together with the time delay on its dynamics. Since the model represents a very common food chain structure of real-world systems, we take one such system as an example: the bobcat, a species found throughout most of North America and from southern Canada to central Mexico. Its main prey are rabbits and squirrels, which depend on grass and leaves. Kittens may be taken as prey by adult bobcats when prey levels are low, but this is very rare and does not influence the population dynamics much [21]. Some species, such as the golden eagle and coyotes, predate on bobcats. Diseases, accidents, hunters, and starvation are the leading causes of death of the species. A bobcat is able to survive for long periods without food but will eat heavily when prey is abundant, and it sometimes caches hunted prey to eat later. The gestation period of the species is generally from 60 to 70 days; we represent this time lag by introducing a time delay. In our work, the interspecific competition coefficients of both prey species are considered along with the intraspecific interference coefficient of the predator, and the effect of all of these parameters is studied. We consider a Holling type II functional response for predation on both prey and study the effect of the predation rates b_13, b_23 together with the conversion rates β_31, β_32 of prey into predator on the model system. The death rate of the predator also affects the dynamics of the model system significantly. All these parameters are considered in the numerical simulations to show the behavior and changes in the dynamics of the model system.
The rest of the organization of this paper is as follows: In Section 2, the model system is formulated.Also, positive invariance and boundedness have been shown in this section.Stability analysis for both delayed and nondelayed model systems has been made in Section 3. The conditions for the existence of Hopf bifurcation are also performed here.Next, Section 4 is devoted to establish the formulae to determine the direction, period, and stability of Hopf bifurcation of the bifurcating periodic solution and some numerical simulations are carried out to illustrate the analytical results in Section 5. Finally, we end the paper with a brief conclusion and discussion in Section 6.
The Mathematical Model
Consider a Gause-type model describing an ecological system in which a predator species X(t) (bobcats) predates two competing prey species N_1 (squirrels) and N_2 (rabbits) (see Figure 1). It is well known that gestation delay, as a time lag, plays a complicated role in the dynamics of prey-predator systems. The considered system is therefore governed by delay differential equations of the form given in [2,5], with initial conditions specified by a history function phi: [-tau, 0] -> R^3 equipped with the usual norm. Here, all the parameters are positive. r_1, K_1, alpha_12 and r_2, K_2, alpha_21 are the intrinsic growth rate, carrying capacity, and interspecific competition coefficient of the prey species N_1 and N_2, respectively; b_13, beta_31 and b_23, beta_32 are the feeding rates per predator per unit prey consumed and the assimilation (conversion) efficiencies of the predator with respect to the prey species N_1 and N_2, respectively; and delta, gamma are the death rate and intraspecific interference coefficient of the predator species X in the absence of prey. The physical interpretation of a_i is that a_i = 1/A_i, where A_i is the half-saturation constant of the prey species N_i.
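A plausible form of the governing delay differential equations, consistent with the description above (logistic growth with interspecific competition for each prey, Holling type II predation, and a gestation delay tau in the predator's numerical response), is the following sketch; the exact placement of the delay and of the interference term -gamma X^2 here is an assumption rather than a statement of the original system:

```latex
\begin{aligned}
\frac{dN_1}{dt} &= r_1 N_1\left(1-\frac{N_1}{K_1}\right) - \alpha_{12} N_1 N_2 - \frac{b_{13} N_1 X}{1 + a_1 N_1},\\[2pt]
\frac{dN_2}{dt} &= r_2 N_2\left(1-\frac{N_2}{K_2}\right) - \alpha_{21} N_1 N_2 - \frac{b_{23} N_2 X}{1 + a_2 N_2},\\[2pt]
\frac{dX}{dt}  &= -\delta X - \gamma X^2
                 + \frac{\beta_{31}\, N_1(t-\tau)\, X}{1 + a_1 N_1(t-\tau)}
                 + \frac{\beta_{32}\, N_2(t-\tau)\, X}{1 + a_2 N_2(t-\tau)},
\end{aligned}
```

with a nonnegative initial history phi(theta) = (phi_1(theta), phi_2(theta), phi_3(theta)) prescribed on [-tau, 0].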
Positive Invariance.
It is necessary to ensure that the species always survive. For this, we need to show that the system is positively invariant. Theorem 2.1. All the solutions of (1) with positive initial conditions remain positive.
Boundedness
Theorem 2.2. For any positive solution phi(t) = (N_1(t), N_2(t), X(t)) of system (1), there exists a T > 0 such that 0 < N_1(t) <= M_1, 0 < N_2(t) <= M_2 and 0 < X(t) <= M_3 for t >= T. Proof. From Theorem 2.1, N_1(t), N_2(t), X(t) remain nonnegative. From the first equation of model system (1), a differential inequality for N_1 is obtained; by the standard comparison rule, this yields the bound M_1. Similarly, from the second equation of model system (1) and the comparison rule, we obtain the bound M_2. Finally, from the third equation of model system (1) and the comparison rule, we obtain the bound M_3.
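Assuming the logistic prey growth of the sketch above, the standard comparison argument gives, for instance,

```latex
\frac{dN_1}{dt} \le r_1 N_1\left(1-\frac{N_1}{K_1}\right)
\quad\Longrightarrow\quad \limsup_{t\to\infty} N_1(t) \le K_1,
```

and similarly limsup N_2(t) <= K_2; substituting these bounds into the predator equation yields dX/dt <= (beta_31 M_1/(1 + a_1 M_1) + beta_32 M_2/(1 + a_2 M_2) - delta) X - gamma X^2, from which an eventual upper bound M_3 for X(t) follows. These expressions are illustrative under the assumed model form and need not coincide with the M_i of the theorem.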
Stability Analysis and Existence of Hopf Bifurcation
Model system (1) has seven possible nonnegative equilibrium points, among them E_0(0, 0, 0). We omit the existence criteria for these equilibrium points here, as they have been shown in Upadhyay and Iyenger [2].
3.1. Linear Stability Analysis. In this section, we discuss the stability criteria of all the abovementioned equilibrium points.
(i) The eigenvalues of the variational matrix around E_0 are r_1, r_2, -delta; hence, E_0 is a saddle point. (ii)-(iii) The eigenvalues of the variational matrices around the axial equilibria E_1 and E_2 are obtained in the same way. (iv) The predator-free equilibrium point is E_3(N_1, N_2, 0). One eigenvalue of the variational matrix around E_3, denoted lambda_1, corresponds to the X direction, and the other two eigenvalues are roots of a quadratic equation whose constant term B satisfies B > 0 under condition (12). Thus, the equilibrium point E_3(N_1, N_2, 0) is locally asymptotically stable if lambda_1 < 0, and E_3 exists under condition (12).
Again, E_3 is unstable in the positive direction orthogonal to the N_1-N_2 plane, i.e., in the X direction, if lambda_1 > 0. (v) The equilibrium point E_4(N_1~, 0, X~) is given by X~ = (1/gamma)(beta_31 N_1~/(1 + a_1 N_1~) - delta), where N_1~ is a positive solution of equation (17). By Descartes' rule of signs, (17) has a unique positive solution. One eigenvalue of the variational matrix around E_4 is lambda_1 (given by (20)), and the other two eigenvalues are roots of a quadratic characteristic equation; these two eigenvalues have negative real parts if condition (23) holds. It follows that the equilibrium E_4(N_1~, 0, X~) is locally asymptotically stable if lambda_1 (given by (20)) is negative and condition (23) holds. Also, E_4 is unstable in the positive direction orthogonal to the N_1-X plane. (vi) The equilibrium E_5 is defined analogously through a positive solution of the corresponding equation, and, similarly to E_4, conditions for its local asymptotic stability can be obtained. (vii) For the interior equilibrium E*(N_1*, N_2*, X*), we discuss the two cases separately. Case 1. When tau = 0.
The eigenvalues of the Jacobian matrix J around E* are the roots of a cubic characteristic equation. From the Routh-Hurwitz criterion, we can state the following theorem.
Separating real and imaginary parts, we obtain two equations, and some further calculation, using sin^2(omega tau) + cos^2(omega tau) = 1 together with (38), yields an equation in omega alone. Without loss of generality, assume that this equation has at least one positive real root omega_0; then from (40) we obtain the corresponding critical delays tau_k, k = 0, 1, 2, ..., for which +/- i omega_0 is a pair of purely imaginary roots of (35). Let us define tau_0 as the smallest of the tau_k. Also, differentiating (34) with respect to tau, we obtain
an expression for Re(d lambda / d tau). Hence, the transversality condition is satisfied if U sin(omega tau) + V cos(omega tau) - D_2 omega^2 > 0. Thus, we can now state the following theorem.
Theorem 3.2. The equilibrium point E* of model system (1) is locally asymptotically stable when tau in [0, tau_0) if the conditions of Theorem 3.1 hold, and is unstable for tau > tau_0. Furthermore, the system undergoes Hopf bifurcation at E* when the value of tau crosses tau_0, provided that U sin(omega tau) + V cos(omega tau) - D_2 omega^2 > 0.
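For orientation, in delayed three-species models of this type the computation above typically starts from a transcendental characteristic equation of the form below, where the coefficients A_i, B_i are generic placeholders rather than this paper's specific expressions:

```latex
\lambda^3 + A_1\lambda^2 + A_2\lambda + A_3
 + \left(B_1\lambda^2 + B_2\lambda + B_3\right)e^{-\lambda\tau} = 0 .
```

Setting lambda = i omega (omega > 0) and separating real and imaginary parts gives two equations in cos(omega tau) and sin(omega tau); squaring and adding them removes the trigonometric terms and yields a cubic in omega^2, whose positive root omega_0 determines the critical delays tau_k and, via implicit differentiation of the characteristic equation, the transversality quantity Re(d lambda / d tau) used in Theorem 3.2.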
Global Stability Analysis
Theorem 3.3. The positive equilibrium point E* of system (1) is globally asymptotically stable provided that the conditions given below are satisfied. Proof. To prove that the equilibrium point E*(N_1*, N_2*, X*) of model system (1) is globally asymptotically stable, the method of Lyapunov functions is used; we derive a sufficient condition which guarantees that E* is globally asymptotically stable. For mathematical convenience, we make a change of variables that transforms the positive equilibrium E* into the trivial equilibrium x(t) = y(t) = z(t) = 0 for all t > 0; under this transformation, model system (1) is reduced to system (49). Computing the upper right derivative of V_1(t) along the solution of (1), we obtain a first estimate. Again, let V_2 = y(t); computing the upper right derivative of V_2(t) along the solution of (1), we obtain a second estimate. Now, let V_3 = z(t); computing the upper right derivative of V_3(t) along the solution of (1), we obtain a third estimate. Due to the structure of (51), we consider a further functional, whose upper right derivative along the solution of system (1) can be estimated. According to the assumptions of the theorem, eta = (eta_ij)_{3x3} is an M-matrix; hence, there exist positive constants h_i (i = 1, 2, 3) such that the required inequalities hold. Consider the Lyapunov function V built from these components; then from (50), (51), and (54) we obtain an estimate showing that, for all t >= T + r, the required inequality holds. Since system (1) is positively invariant, there exist positive constants m_1, m_2, m_3 such that N_1(t) >= m_1, N_2(t) >= m_2, and X* e^{z(t)} = X(t) >= m_3 for t >= T*. Also, system (1) is bounded, and hence corresponding positive upper bounds exist. Using the mean value theorem, one obtains estimates in which N_1* e^{theta_1(t)} lies between N_1(t) and N_1*, N_2* e^{theta_2(t)} lies between N_2(t) and N_2*, and X* e^{theta_3(t)} lies between X(t) and X*.
Letting alpha = min{m_1 l_1, m_2 l_2, m_3 l_3}, it follows that, for t >= T*, the corresponding derivative estimate holds. Noting the form of the Lyapunov function, and applying the method of Lyapunov functions together with (60), we conclude that the zero solution of system (49) is globally asymptotically stable; therefore, the positive equilibrium of system (1) is globally asymptotically stable. The proof is complete.
Note. The positive equilibrium point E* is also globally asymptotically stable for the nondelayed model system (1). The conditions under which E* remains globally asymptotically stable are stated in the following lemma. Lemma 3.1. If the positive equilibrium point E* is locally asymptotically stable, then it is also globally asymptotically stable in the interior of the positive octant under the conditions stated below. Proof. The proof of the lemma is given in the Appendix.
Proof. We prove this theorem by the method of a suitable Lyapunov function [22]. Let the Lyapunov function sigma for system (1) be defined with positive constants p_1, p_2, and p_3. Clearly, sigma is a nonnegative C^1 function defined in R^3_+. We then compute psi along the system. Evaluating psi at the boundary equilibria, we find the following. If we choose p_1 and p_2 large enough so that p_1 r_1 + p_2 r_2 - p_3 delta > 0, then psi is positive at E_0. Also, if conditions (12), (19), and (23) are satisfied, then p_2 L_1 + p_3 L_2 > 0 and p_1 L_3 + p_3 L_4 > 0, i.e., psi is positive at E_1 and E_2. And if the inequalities in (65) hold, then psi is positive at all the remaining boundary equilibria. Therefore, system (1) is uniformly persistent, which follows from Theorem 3.12 of [22]. The proof is complete.
Properties of Hopf Bifurcation
In this section, we present the formulae for determining the direction, stability, and period of the bifurcating periodic solutions of system (1). The method used is based on the normal form theory and the center manifold theorem described in Hassard et al. [23]. In the previous section, we noted that at the critical value of the time delay, tau = tau_0, a family of periodic solutions bifurcates from the steady state under certain conditions. Without loss of generality, (35) has a pair of purely imaginary roots +/- i omega_0 at these critical values of tau, and the system undergoes Hopf bifurcation. Hence, for any root of (35) of the form lambda(tau) = nu(tau) - i omega(tau), we have nu(tau_k) = 0, omega(tau_k) = omega_0, and d nu / d tau evaluated at tau = tau_k is nonzero. Let tau = tau_0 + mu, mu in R, so that mu = 0 is a Hopf bifurcation value for the system. Define the space of continuous real-valued functions C accordingly, and let x_i(t) = x_i(tau t) for i = 1, 2, 3; delay system (1) then transforms into a functional differential equation in C as
which gives the corresponding expressions. Proceeding in the same manner as Hassard et al. [23], we compute the coordinates describing the center manifold C_0 at mu = 0. Let x_t be the solution of (78) when mu = 0. Define z(t) = <q*, x_t> and W(t, theta) = x_t(theta) - 2 Re{z(t) q(theta)}. On the center manifold C_0, we have
where z and z-bar are local coordinates for the center manifold C_0 in the directions of q* and q*-bar. We now consider only real solutions x_t in C_0 of (78), which gives the corresponding expression. It then follows from (90) and (91) that
so that, from (92), we obtain an expansion whose coefficients can be compared with (93). In order to compute g_21, we need to compute W_20(theta) and W_11(theta).
Comparing coefficients with (100) gives the corresponding relations. From (102), (105), and the definition of A, we obtain an equation for W_20(theta). Note that q(theta) = q(0) e^{i omega_0 tau_0 theta}; hence, W_20(theta) can be written explicitly. Similarly, from (103), (106), and the definition of A, we obtain W_11(theta), where eta(theta) = eta(0, theta). From (98) and (100) we obtain further relations, and using (108) and (113) in (111), and noticing that (i omega_0 tau_0 I - integral from -1 to 0 of e^{i omega_0 tau_0 theta} d eta(theta)) q(0) = 0, and solving this system for K_1, we obtain the required expression. Furthermore, g_21 can be expressed in terms of the parameters and the delay term. Thus, we can compute the values which determine the qualities of the bifurcating periodic solution in the center manifold at the critical value tau_0. We now state the results in the following theorem.
Theorem 4. The sign of mu_2 determines the direction of the Hopf bifurcation: if mu_2 > 0, the Hopf bifurcation is supercritical and the bifurcating periodic solutions exist for tau > tau_0; if mu_2 < 0, the Hopf bifurcation is subcritical and the bifurcating periodic solutions exist for tau < tau_0. beta_2 determines the stability of the bifurcating periodic solutions.
The bifurcating periodic solutions are stable if beta_2 < 0 and unstable if beta_2 > 0, and T_2 determines the period of the bifurcating periodic solutions: the period increases if T_2 > 0 and decreases if T_2 < 0.
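In the normal-form framework of Hassard et al. [23] followed here, the quantities mu_2, beta_2, and T_2 are typically computed from the coefficients g_20, g_11, g_02, g_21 obtained above as

```latex
\begin{aligned}
c_1(0) &= \frac{i}{2\omega_0\tau_0}\left(g_{20}g_{11} - 2\lvert g_{11}\rvert^2 - \frac{\lvert g_{02}\rvert^2}{3}\right) + \frac{g_{21}}{2},\\[2pt]
\mu_2 &= -\frac{\operatorname{Re}\{c_1(0)\}}{\operatorname{Re}\{\lambda'(\tau_0)\}},\qquad
\beta_2 = 2\operatorname{Re}\{c_1(0)\},\qquad
T_2 = -\frac{\operatorname{Im}\{c_1(0)\} + \mu_2\operatorname{Im}\{\lambda'(\tau_0)\}}{\omega_0\tau_0},
\end{aligned}
```

where lambda'(tau_0) denotes d lambda / d tau evaluated at tau = tau_0; this is the standard form of these formulas and is given here for reference, not as a reproduction of the paper's own final expressions.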
Numerical Simulation
In order to study the dynamic behavior of model system (1), we consider a fixed set of parameter values. With respect to this parameter set, the nonzero equilibrium point is calculated as E*(77.3512, 38.3201, 50.7297), and the corresponding omega_0 and critical value of the time delay tau_0 are 0.799559 and 0.20757, respectively. In Figure 2, we can see that the nondelayed system is stable for this set of parameter values. The system also remains stable for tau = 0.2 < tau_0 = 0.20757 and becomes unstable for tau = 0.25 > tau_0 as tau increases. The system bifurcates at the critical value of the delay, i.e., at tau_0. From Figure 2, we can also see the destabilizing effect of the time delay as it becomes larger than the critical value. We are now interested in studying the effect of the other parameters on the dynamics of the model system. We found that the carrying capacity K_1 of the first prey N_1 does not affect the dynamics of the system (see Figure 3(a)), whereas the carrying capacity K_2 of the second prey N_2 changes the nature of the model system significantly: the system moves from a stable state to limit cycle behavior as K_2 increases (see Figure 3(b)). We have also plotted the relation between these parameters (K_1 and K_2) and the critical value of the time delay. In both cases, the value of tau_0 decreases as K_1 or K_2 increases (see Figures 3(c) and 3(d)). It is interesting to note that the values of tau_0 vary over the same ranges of K_1 and K_2. It is easy to check in Figure 4 that the system changes its dynamics as the value of the time delay changes. In Figures 4(a) and 4(c), for some values of K_1 or K_2 the system shows a stable focus and for other values it shows limit cycle behavior, according to whether tau = 0.2 is less than or greater than the respective value of tau_0. Since tau = 0.3 and tau = 0.5 exceed the values of tau_0 for all the respective K_1 and K_2 used in Figures 4(b) and 4(d), the system shows limit cycle behavior in each case. In the same manner, we can see that only the growth rate of N_2, and not that of N_1, affects the dynamics of the system: the system dynamics change from a stable state to a limit cycle as the value of r_2 increases, but remain unchanged when r_1 is varied (see Figure 5). Considering the effect together with the time delay, and comparing Figures 5(c) and 5(d) with Figure 5(a), the system remains stable at tau = 0.15 for all values of r_1, while it turns into a stable limit cycle when tau = 0.3 is taken, in both cases. At tau = 0.2 and tau = 0.5 in Figures 5(e) and 5(f), respectively, the system shows cyclic behavior for some values of r_2 and remains stable for other values of r_2, as in Figure 5(b).
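The behavior described above (a stable focus below the critical delay and oscillations beyond it) can be probed numerically with a short script of the following kind. This is a minimal sketch for a system of the assumed form: all parameter values, the initial history, and the step size are hypothetical placeholders chosen only for illustration, so the critical delay of this toy configuration will differ from the reported tau_0 of about 0.20757.

```python
import numpy as np

# Hypothetical placeholder parameters (not the paper's values).
r1, K1, a12 = 1.5, 100.0, 0.005   # prey N1: growth rate, carrying capacity, competition
r2, K2, a21 = 1.2, 110.0, 0.005   # prey N2
b13, b23 = 0.5, 0.4               # Holling type II predation rates
beta31, beta32 = 0.6, 0.5         # conversion efficiencies of prey into predator
a1, a2 = 0.05, 0.05               # reciprocals of the half-saturation constants
delta, gamma = 1.0, 0.05          # predator death rate and interference coefficient
tau = 0.25                        # gestation delay; vary this to probe stability changes

def rhs(N1, N2, X, N1d, N2d):
    """Right-hand side of the assumed model; N1d, N2d are N_i(t - tau)."""
    dN1 = r1 * N1 * (1 - N1 / K1) - a12 * N1 * N2 - b13 * N1 * X / (1 + a1 * N1)
    dN2 = r2 * N2 * (1 - N2 / K2) - a21 * N1 * N2 - b23 * N2 * X / (1 + a2 * N2)
    dX = (-delta * X - gamma * X**2
          + beta31 * N1d * X / (1 + a1 * N1d)
          + beta32 * N2d * X / (1 + a2 * N2d))
    return np.array([dN1, dN2, dX])

dt, T = 1e-3, 200.0
lag = int(round(tau / dt))            # delay expressed in integration steps
n = int(T / dt)
Y = np.zeros((n + 1, 3))
Y[:lag + 1] = [60.0, 40.0, 5.0]       # constant history on [-tau, 0]

for k in range(lag, n):
    N1d, N2d = Y[k - lag, 0], Y[k - lag, 1]      # delayed prey densities
    step = Y[k] + dt * rhs(*Y[k], N1d, N2d)      # explicit Euler step
    Y[k + 1] = np.maximum(step, 0.0)             # clip tiny negative overshoots

print("final state (N1, N2, X):", Y[-1])  # converged point vs. sustained oscillation
```

Plotting Y against time (or N_1 against X) for tau slightly below and slightly above the critical delay of the chosen parameter set reproduces, qualitatively, the transition from a stable focus to a limit cycle described above.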
Effect of the Predation Rate of N_1 and the Conversion Efficiency of the Predator (b_13, beta_31)
In Figures 6(a) and 6(b), we can see that the system changes its behavior from a stable focus to a limit cycle as the value of beta_31 is increased from 1.3 to 1.55, whereas b_13 has no effect on the dynamics of the system, which remains at a stable focus. Moreover, it is interesting to see the opposite effects of increasing beta_31 and b_13, over a particular range, on the critical value of the time delay (see Figures 6(c) and 6(d)): the critical value increases with an increase in b_13, whereas it decreases with an increase in beta_31. We also observed that decreasing beta_31 keeps the system in a stable state for a larger time lag.
The predation rate of N_2 and the conversion efficiency of the predator (b_23, beta_32), on the other hand, do not follow any particular pattern in their respective critical values over any particular range, so we study the effect of these parameters on the nondelayed model system only. In Figure 7(a), we can see that the system changes its behavior from a stable focus to a limit cycle as the value of beta_32 increases from 1.05 to 1.35. Also, due to an increase in b_23 from 0.15 to 0.45, the system loses its stability and shows limit cycle behavior (Figure 7(b)). We observed that the system becomes stable as the death rate of the predator species increases (Figure 8(a)): as delta increases from 5.1 to 5.4, the system moves from limit cycle behavior to a stable state. We also studied the effect of the intraspecific interference coefficient of the predator and found that the system likewise moves from limit cycle behavior to a stable state as its value increases (see Figure 8(b)). For the particular set of parameter values considered, the system shows limit cycle behavior for delta < 5.3 and becomes stable for delta >= 5.3. Similarly, to keep the system stable, we also need gamma >= 0.005; otherwise, the system shows limit cycle behavior for gamma < 0.005.
Conclusions and Discussions
In this work, we have studied a Gause-type two-prey one-predator model system. We have discussed the stability properties of the equilibrium points of the model system, including positive invariance and boundedness. Nonlinear stability analysis for both the delayed and nondelayed model systems has been established. Conditions for the existence of Hopf bifurcation at the nonzero equilibrium point of the delayed model system have been obtained. Criteria for uniform persistence of the system have been derived, and it is observed that the delay due to gestation does not affect the uniform persistence of the system. The formulae for the direction, stability, and period of the bifurcating solution have been calculated using the normal form theory and the center manifold theorem. Numerical simulations are presented to verify the analytical predictions; they illustrate that the time delay destabilizes the system depending on its strength. We have studied the effect of all the control parameters on the dynamics of the model system. It has been observed in our simulation experiments that some of the parameters switch the dynamics of the system between a stable state and an unstable state, as summarized in the following: (i) the conversion rate beta_31 of prey N_1 into predator is sufficient to break the stability of the nondelayed model system as it crosses beta_31 = 1.49; (ii) to maintain the stability of the nondelayed model system, we need (for the particular parameter set used) an intraspecific interference coefficient gamma >= 0.005, a predator death rate delta >= 5.3, and a carrying capacity of the second prey population K_2 <= 114; (iii) the time lag up to which the system remains in a stable state increases with an increase in the predation rate of N_1 over 0.01 <= b_13 <= 0.5, whereas it decreases with an increase in the conversion rate of prey N_1 into predator over 1.2 <= beta_31 <= 1.48; (iv) the time lag decreases with an increase in the carrying capacities of both prey populations (K_1, K_2) in the range 80-110, but the time delay up to which the system remains stable is larger for decreasing K_2.
The rest of the parameters do not change the nature of the dynamical system much. We have also studied the effect of the growth rates of the prey populations on the critical value of the time delay and observed that, for particular values of r_1 and r_2, the value of tau_0 first decreases up to a certain level and then starts to increase as the growth rate of the prey population increases. The population of the bobcat depends primarily on the population of its prey, and it has generally been found to be stable and healthy [24]. Since the species leads the wildcat skin trade, the bobcat is legally hunted in small numbers. In some areas they are quite rare, while in others they have stable and sometimes dense populations; hence, some states allow regulated hunting, while in others they are protected. At a regional level, the bobcat is totally protected in ten US states; in Canada, hunting and trade are regulated; and in Mexico, hunting is regulated in five states [25]. Using the above results, we can work toward the goal of a healthy, stable ecosystem and plan for regulating the bobcat population in North America and southern Canada. It is straightforward to satisfy conditions (A.6) for the values of the constants c_1 and c_2 as given in the theorem. Further, conditions (A.5) are satisfied from the conditions given in (62). Therefore, the derivative of V is negative definite under condition (62), and V is a Lyapunov function with respect to E*. Hence, E* is globally asymptotically stable.
Figure 3: Phase plot diagram of the nondelayed model system (1) for different (a) K_1 and (b) K_2. Relation between the critical value of the time delay and (c) K_1 and (d) K_2. The rest of the parameters are as given in the text.
Figure 5: Phase plot diagram of the nondelayed model system (1) for different (a) r_1 and (b) r_2. Phase plot diagram of system (1) for different r_1 at (c) tau = 0.15 and (d) tau = 0.3. Phase plot diagram of system (1) for different r_2 at (e) tau = 0.2 and (f) tau = 0.5. The rest of the parameters are as given in the text.
Figure 7: Phase plot diagram of the nondelayed model system (1) for different (a) beta_32 and (b) b_23. The rest of the parameters are as given in the text.
Figure 6: Phase plot diagram of the nondelayed model system (1) for different (a) beta_31 and (b) b_13. Relation between the critical value of the time delay and (c) beta_31 and (d) b_13. The rest of the parameters are as given in the text.
Figure 8: Phase plot diagram of the nondelayed model system (1) for different (a) delta and (b) gamma. The rest of the parameters are as given in the text. | 7,589.4 | 2018-11-07T00:00:00.000 | [
"Environmental Science",
"Biology",
"Mathematics"
] |
The Realization of Social Justice for the Poor Citizens According to Legal Philosophy
Pancasila is the state ideology of Indonesia. One of the precepts of Pancasila, the principle of Social Justice for All Indonesians, implies that all Indonesian people have the same position before the law. Nowadays, however, there have been many cases of injustice against poor citizens. This research therefore concerns the realization of social justice for underprivileged people in the philosophy of law, especially based on the theory known as Critical Legal Studies.
Every principle in Pancasila contains different values, especially the fifth principle of Pancasila, which reads "Social justice for all Indonesian people". The principle of social justice for all Indonesians shows that Indonesian people realize that they have the same rights and obligations. Justice itself carries the meaning of virtues related to human relations. 2 The consequences of social justice, such as equality and freedom, must be realized: for all Indonesian people, equality before the law must be made real.
Today, when talking about justice, there are many cases of injustice in the law, especially for poor citizens, here taken to be identical with the poor. Article 34 of the Constitution provides that the poor and neglected children are cared for by the state, that the state develops a social security system for all people and empowers those who are weak and unable, in accordance with human dignity, and that the state is responsible for providing health care facilities and decent public service facilities. According to Law Number 13 of 2011 concerning the Handling of the Poor, Article 1 Number 1, the poor are people who have absolutely no source of livelihood and/or have a source of livelihood but lack the ability to fulfill the basic needs that are appropriate for their lives and/or the lives of their families. A low standard of living causes the inability of individuals to fulfill both material and non-material needs.
Many poor citizens experience discrimination before the law. One such case was that of 55-year-old Minah from Banyumas: merely because she stole 3 cocoa fruits worth no more than Rp. 10,000, she was sentenced to 1.5 months in 2009. In fact, she had to borrow Rp. 30,000 for transportation to the court, because her house was quite far from it. Besides the example above, there are still many other cases, which will be described in different chapters. Such examples are certainly not in accordance with Article 34 of the Constitution, which states that the poor are cared for by the state; the state should be responsible for this, especially in terms of law enforcement. These cases deserve concern because they mean that the values contained in Pancasila regarding social justice for all Indonesian people are not realized across all social layers. Based on this elaboration, the author thinks that this problem really needs to be discussed so that people become increasingly aware of these events, or of similar events happening to the general public. Under the title "The Realization of Social Justice for the Poor Citizens According to Philosophy of Law", the problem that the author wants to discuss is: how is social justice for poor citizens realized according to Critical Legal Studies?
B. Method
The method used by the author is normative legal writing. Normative legal writing is a process of finding rules of law, legal principles, and legal doctrines in order to answer the legal issues at hand.
Justice
The original word for justice comes from the word "fair", from the Arabic al-adl, which means straight in the soul, not being overcome by desire, not doing evil, balanced, equal, and so on. In English, justice is referred to by various terms, such as: justice (translated: fairness, propriety, correctness, and the judiciary), fairness (translated as honesty), equity (translated: righteousness, fairness, and reasonableness), and impartiality (translated as justice, being impartial, an honest attitude, an attitude of fairness, honesty, and a neutral attitude). 3 In the Kamus Besar Bahasa Indonesia (KBBI), justice is an adjective describing a noun or pronoun and has three meanings: first, equal in weight, not one-sided, impartial; second, siding with what is right; third, proper or as it should be. The oldest meaning of justice was formulated by legal experts in Roman times in the Latin "Justitia est constans et perpetua voluntas jus suum cuique tribuendi", translated into English as "Justice is the constant and perpetual will to render to each man what is his due". 5 This meaning of justice from Roman times still influences the Western understanding of justice. According to A.I. Goodhart, it is generally accepted that the essence of the idea of justice is "to render to each man what is his due". 6 I.P. Plamenatz expressed the opinion that the meaning of justice is "giving every man his due, and the setting right of wrong, either by compensating the victim of wrong or by punishing the doer of it" (giving everyone what is due, and correcting wrongs either by providing compensation to the victims or by punishing the perpetrators of those wrongs). 7 The Encyclopedia Americana (1973) gives several definitions of justice, one of which is "the constant and perpetual disposition to render every man his due", a fixed and eternal tendency to give each person what is necessary. 8 On the basis of such understanding, the main meaning of justice is giving to each person what is appropriate. What is meant by "giving what should be" then needs further explanation. According to Mortimer Adler, the concept of giving to each person what is proper contains two important perspectives that are different, and neither can be derived from or reduced to the other. The meaning of "what should be" for everyone can be determined by (1) the measure of the person's rights, whether natural rights or rights originating in the applicable law, and (2) a comparison of the abilities or services of one person relative to another. Based on these two aspects of rights and comparison, the sense of "what should be" for everyone has two forms, namely: a. guarantee of rights so as to be free from violations; b. the treatment one deserves, i.e., treating the same things in the same way and things that are not the same in unequal ways, in proportion to that inequality.
The Meaning of Poor Citizens
According to the Kamus Besar Bahasa Indonesia (KBBI), the underprivileged or the poor are people in a condition of being unable to fulfill basic needs such as food, clothing, shelter, education, and health. Poverty may be caused by the scarcity of means to fulfill basic needs, or by difficult access to education and employment. 9 Regarding the benchmark of poverty, according to Sayogo it is the number of calories eaten per capita: poverty is set at the limit of 1,700 calories a day per capita. This limit is already below the normal needs of the people of Indonesia, who need more than 2,000 calories per day. 10 According to the conclusions of a research centre study at the Bandung Institute of Technology in 1992, although Indonesia has abundant wealth, in reality as much as 61.6% of farming households in Indonesia were classified as poor, under the criterion of an income of IDR 50,000 (now estimated at Rp 500,000) or less per month per family. 11 Meanwhile, according to Alex Emyll, MSP (1992), the criterion for borderline poverty is an income of Rp. 20,000 or less per month per individual.
According to BPS (2007), the poor are families that do not have the capacity to fulfill basic needs, or people who have a source of livelihood but cannot fulfill the needs of their families in a manner appropriate to human dignity, with the following characteristics or criteria: 12 (i) low expenditure, or expenditure below the poverty line, namely less than Rp. 175,324 for urban communities and Rp. 131,256 for rural communities per person per month, excluding non-food needs; (ii) generally low education levels and no skills; (iii) not having a habitable residence, including not having bathing, washing, and toilet (MCK) facilities; (iv) ownership of assets that is very limited in number or value; (v) limited social relations and little involvement in community activities; and (vi) limited access to information (newspapers, radio, television, and the internet).
Justice Theory
Aristotle put forward a theory of justice in which he distinguished five types of actions that can be classified as just. The five types of justice proposed by Aristotle are as follows: 13 a. Commutative justice: the treatment of a person without regard to the services they have given. Example: a person who has committed a mistake or violation, regardless of their position, is still punished according to the mistake or violation committed. b. Distributive justice: the treatment of a person according to the services they have given. Example: employees of a company earn different salaries based on their tenure, rank, level of education, or the difficulty of their work. c. Natural justice: giving something according to what others have given to us. Example: a person who answers a greeting spoken by another person is said to be just because he has returned what he received from that person. d. Conventional justice: obtains when a citizen has obeyed all the laws and regulations that have been issued. e. Improvement justice: according to improvement justice, an action is fair when someone has tried to restore the good name of another person that has been tarnished.
The Value of Justice in the Fifth Precept of Pancasila
The value of justice is found in the fifth principle of Pancasila, namely the principle of social justice for all the people of Indonesia. According to Notohamidjojo, social justice demands that humans live properly in society, and each must be given a chance. Development, and the implementation of development, need to rely on and realize not only justice but also propriety. Human propriety can also be called fair or proportional propriety. 14 The Precept of Social Justice for All Indonesian People contains the values of social justice, including: 15 1. Fair treatment in all aspects of life, especially in the fields of politics, economics, and socio-culture.
2. The realization of social justice for all the people of Indonesia; 3. Balance between rights and obligations; 4. Respect for other people's rights; 5. The ideal of a prosperous society that is materially and spiritually equitable for all the people of Indonesia; 6. Love of progress and development.
The fifth precept in Pancasila implies that every member of Indonesian society is aware of having the same rights and obligations in creating social justice in the lives of the Indonesian people. For this reason, behavior should be developed that reflects a spirit of kinship and mutual cooperation, a fair attitude towards others, a balance between rights and obligations, and respect for the rights of others.
The principle of social justice for all Indonesian people embodies the value that every legal regulation, whether a statute or a court decision, should reflect the spirit of justice. Justice is one of the purposes of law: law must carry the value of justice, and the purpose of law is to achieve a sense of justice in society. 16 The value of justice must apply in people's lives, especially in the aspect of law enforcement. In the judicial process, the judge, in deciding each case, must prioritize the value of justice regardless of the legal facts. Justice seeks to prevent the abuse of power by the stronger party against weaker parties.
Law is a tool to uphold justice and create social welfare. Essential justice is the justice found in society. 17 Without justice as its goal, law descends into a means of justifying the arbitrariness of the majority or of the authorities against minorities or those who are governed.
If the value of justice contained in the fifth Pancasila precept is related to Aristotle's theory of justice, the justice referred to here is commutative justice: a person must be treated in the same way without regard to the services they have provided. In this sense, the judge must enforce the law in a case without regard to the defendant's position, purely on the basis of the mistake that has been made; the judge must decide according to the mistakes and violations committed by the perpetrator. In practice, however, this is not implemented in accordance with the expectations and conditions of the Indonesian people, because there are still many cases of injustice against underprivileged citizens in law enforcement by judges.
Critical Legal Studies
Critical Legal Studies is a movement that emerged in the 1970s in the United States. 18 For the critical legal stream, the claim that modern law embodies democracy and the market is a lie that has never been realized. Modern law has been built on the das sollen of democracy, much like responsive law, but the das sein underlying the formation of formal law is a political arena shaped by the tug-of-war between the interests of those who hold authority.
The critical legal studies tries to answer by basing their thoughts on several general characteristics as follows: 19 a. The critical legal studies criticizes law that is full and dominant with certain ideologies. b. This critical stream of legal studies criticizes the applicable law which in fact siding with politics and law as such it is not neutral at all. c. The critical legal studies has a great commitment to individual freedom with certain limitations. Because of this, this philosophy has a lot to do with human emancipation. d. The critical legal studies lacks of trust in abstract forms of truth and truly objective knowledge. Because of this the critical legal studies strongly rejects teaching in the legal positivism flow. e. The critical legal studies rejects the act between theory and practice also rejects the difference between fact and value, which is a characteristic of liberalism. Critical legal studies reject the notion of traditional law experts, namely: the law is objective, meaning that reality is a legal foothold, the law is certain, meaning that the law provides a definite and understandable answer, the law is neutral, meaning that it does not favor certain parties, and the law is autonomous, meaning that it is not influenced by politics or other sciences. Furthermore, critical legal flow tells some basic concepts as follows: 20 a. Critical legal studies rejects liberalism. b. Critical legal studies presents the contradictions between individuals and other individuals as well as with the community. c. Critical legal studies of do a legitimation in the community so far, reinforced by the principles of hegemony and reification, actually strengthens the oppression of the strong / powerful against the weak. d. Critical legal studies rejects the life model of liberal society which is actually more of an engineering or falsehood that is strengthened by the legal sector. This principal is trying to overhaul the legal reasoning system that is full of falsehood. e. Critical legal opinion argues that legal doctrine is something that is uncertain and full of contradictions, so it can be interpreted arbitrarily by the interpreter. f. Because of the uncertain nature of legal doctrine, this school uses a more historical, socio-economic, and psychological model of legal analysis and interpretation. g. Critical legal thinking holds that juridical analysis obscures the real reality, which gives birth to a ruling as if it is just and legitimate. h. There is no neutral interpretation of a legal doctrine, but the interpretation is always subjective and political. The flow of critical legal studies holds that the law is certain. Therefore, the law is contradictory internally and the same provisions can always be interpreted or applied differently and even contradict each other.
The Realization of Social Justice for the Poor Citizens in Legal
Philosophy ISSN 1978-5186 Volume 12 Number 4, October-December 2018 293 The principle of social justice for all Indonesian people contains values that every legal regulation, both law and court decision reflects the spirit of justice. Social justice in this 5th principle implies that all people have equality before the law. Rich or poor people have the same rights and obligations, and also have the same position in court. This justice must be felt by most Indonesian people, not by a handful of certain groups.
But in reality the realization of social justice for all Indonesian people is still being done. The realization of social justice, especially for the poor, is still very minimal. Many cases of injustice have happened to them. The following are examples of cases: a. The case of theft of flip-flops by AAL, Brigadier Rusdi Harahap as law enforcement officers who directly accused AAL and carried out vigilante acts and treated AAL arbitrarily. AAL and his friends were beaten, kicked, punched and even held captive by Brigadier Rusdi Harahap. AAL was decided by the Palu Court, Central Sulawesi that he was formally found guilty even though the sandals that were being used as evidence were not "eiger" brand sandals alleged by Brigadier Rusdi Harahap. The sandals stolen by AAL are "Ando" not "Eiger" which is meant by Brigadier Rusdi Harahap. So that the sandals taken by AAL are menless sandals. When taking sandals that were not manned it was like taking a fish in the sea. AAL should not be found guilty. 21 b. The case of Asyani (63), she was charged with stealing seven stems of teak trees claimed by Perhutani in her home in the village of Jatibanteng Situbondo, East Java. In the trial at the Situbondo District Court, Asyani was charged with Article 12 letters c and d jo article 83 paragraph (1) letter a of Law No. 18 of 2003 concerning Prevention and Eradication of Forest Destruction (P3H), with the threat of at least 1 year and a maximum of 5 years in jail. Asyani felt that she did not steal Perhutani's wood, the wood she cut was her wood which had been around her house for decades. 22 c. Cases of poor people in Bojonegoro, Supriyono and Sulastri. They are husband and wife. They were tried in court without having a strong legal basis and were threatened with a sentence of seven (7) was no evidence Supriyono-Sulastri had stolen a bunch of bananas, the police and prosecutors continued to process the case. This case has also been reconciled at the level of the Neighborhood Unit (RT) and village witnessed by the police. However, the husband and wife remain on the table. 23 d. Minah, who is working in the field, picking three cacao fruits belonging to PT Rumpun. The cocoa tree is about 165 centimeters tall. Minah, which is 160 centimeters tall, slightly tiptoe when picking it. After picking three cacao fruits with her bare hands, Minah put them under the tree. While looking, one by one the cocoa fruit was skinned. That's when Sutarno, the plantation foreman, arrived. Sutarno interrogates Minah. According to Sutarno, Minah claimed to take the cocoa. Minah apologizes to Sutarno while crying. But Sutarno still reported Minah's actions to PT Rumpun. PT Rumpun's manager then reported Minah's actions to the Ajibarang Sector Police. After that, Minah was summoned by the police, prosecutors and the court, three times each. Until the 1-month and 15-day sentence of imprisonment was dropped: Minah was proven to steal three cacao fruits, which cost only around Rp. 2,100. 24 However, if these cases are compared to corruption cases, for example the case regarding the corruption case of the Chairman of the Bengkalis DPRD, Heru Wahyudi. The Panel of Judges of the Pekanbaru District Court, Riau, handed down a mild verdict against the Chairman of the Bengkalis DPRD, Heru Wahyudi. Even though he was found guilty of corruption in social assistance funds, he was only sentenced to 18 months in prison. This verdict is very contrary to the demands of the prosecutor. 
The prosecutor had demanded a sentence of eight years and six months in prison for Heru, plus a fine of Rp. 500 million, with 6 months' imprisonment in default of payment. 25 (https://www.liputan6.com/regional/read/2974957/kasus-korupsi-rp-31-m-ketua-dprdbengkalis-divonis-15-tahun-bui, 29 August 2018.) Seen from the results of these decisions, according to the author, this is very unfair to the poor when viewed through the philosophy of law, especially Critical Legal Studies. This school rejects the notions of traditional lawyers, namely: that law is objective, meaning that reality is its legal ground; that law is certain, meaning that the law provides definite and understandable answers; that the law is neutral, meaning that it is impartial towards particular parties; and that the law is autonomous, meaning that it is not influenced by politics or other sciences. The critical legal studies school holds that the law is not certain; the law is internally contradictory, and the same provisions can always be interpreted or applied differently and even contradict each other.
This Critical Legal Studies is very appropriate according to the author with what is happening in law in reality these days. That the law is no longer neutral, but the law is certain because the applicable law which in fact takes sides with politics and law as such is not neutral at all. It can be seen through the results of the judge's decision that decided on cases of corruption and theft committed by the poor people. Judges' decisions are not commensurate with the differences in deeds they have committed. Corruption is an act that is quite detrimental to the community and the state and corruption cases should not be sentenced with a light sentence while the punishment for poor people whose actions are only small violation is being punished by a strict punishment.
Thus, according to the author, when the realization of social justice for poor citizens is viewed from the philosophy of law, and specifically from Critical Legal Studies, it is indeed true that the law is no longer neutral, so that the legal objective of justice is not achieved, especially justice for the poor.
D. Conclusion
The values of Pancasila contain justice, in the sense that everyone should be treated equally regardless of their position, particularly in the enforcement of the law by judges in court. When judges make their decisions, they should not regard defendants on the basis of their position. However, this principle is not implemented in accordance with the expectations and conditions of Indonesian society, because in many cases injustice occurs in Indonesia, especially against poor citizens: many of them receive severe penalties compared with corruptors, who have caused great harm to the people of Indonesia.
So, according to the author, the law is no longer neutral; in fact it takes sides and is no longer purely about the law. This is apparent from the verdicts against the corruptor and against theft by poor citizens. The Government should be more aware of what really happens in the courts. In particular, judges, when making their decisions, should not regard defendants on the basis of their position and should be fair; their decisions should reflect justice in the sense of what the theory of justice calls commutative justice, in which judges remain impartial and decide on the basis of the mistakes that have been committed.
Based on what has been described in this journal, the goal is the realization of social justice for poor citizens in the philosophy of law. The author further expects that social justice can be applied without discrimination before the law. Thus, the author suggests to readers, especially the citizens of Indonesia and the Government, and in particular law enforcement officers, to be more assertive and fair in applying social justice, both in the handling of cases and in sentencing, which should be as fair as possible and applied according to what the defendant has actually done, with penalties proportionate to the losses caused. The recommendation for law enforcers is not to be one-sided in viewing the legal issues that occur, and to recognize the issues that are truly harmful to our country, so that sentencing can be carried out justly and without favoring the rich only, and so that social justice is not merely wishful thinking. | 5,716.2 | 2018-12-31T00:00:00.000 | [
"Law",
"Philosophy",
"Political Science"
] |
Glycemic Variability and Oxidative Stress: A Link between Diabetes and Cardiovascular Disease?
Diabetes is associated with a two to three-fold increase in risk of cardiovascular disease. However, intensive glucose-lowering therapy aiming at reducing HbA1c to a near-normal level failed to suppress cardiovascular events in recent randomized controlled trials. HbA1c reflects average glucose level rather than glycemic variability. In in vivo and in vitro studies, glycemic variability has been shown to be associated with greater reactive oxygen species production and vascular damage, compared to chronic hyperglycemia. These findings suggest that management of glycemic variability may reduce cardiovascular disease in patients with diabetes; however, clinical studies have shown conflicting results. This review summarizes the current knowledge on glycemic variability and oxidative stress, and discusses the clinical implications.
Introduction
The number of people with diabetes is continuously increasing all over the world. People with diabetes are at a two to three-fold increased risk of developing cardiovascular disease (CVD), and CVD remains the major cause of death in patients with diabetes [1,2].
In the Diabetes Control and Complications Trial/Epidemiology of Diabetes Interventions and Complications (DCCT/EDIC) and the UK Prospective Diabetes Study (UKPDS), intensive glycemic control has been shown to reduce the development of CVD as well as diabetic microangiopathy during long-term follow up in patients with both type 1 (T1DM) and type 2 diabetes (T2DM), so-called legacy effects [3,4]. On the other hand, recent clinical trials aiming at reducing HbA1c to a near-normal level in patients with T2DM have failed to show an additional benefit on CVD outcomes [5][6][7]. The Action to Control Cardiovascular Risk in Diabetes (ACCORD) trial was stopped because of a significant increase in all-cause mortality in patients with T2DM who were randomized to the intensive glycemic control group [5].
A U-shaped association between HbA1c and CVD or all-cause mortality in patients with diabetes has also been reported by Currie et al. [8]. These findings suggest that normalization of HbA1c does not necessarily improve CVD outcomes, and that other factors are associated with these outcomes independently of HbA1c.
Glycemic variability (GV) has been proposed as one of the factors associated with CVD outcomes in patients with and without T2DM. This review summarizes current knowledge of GV and its association with oxidative stress and CVD, and discusses its clinical implication in the treatment of diabetes.
Assessment of Glycemic Variability
Normally, plasma glucose levels are kept within a narrow range of 80-120 mg/dL throughout the day in people with normal glucose tolerance (NGT) (Figure 1) [9]. Once glucose intolerance develops, glycemic swings become greater. One of the reasons for confusion regarding GV is that there are various definitions or concepts of GV. In general, GV refers to intra-day GV or day-to-day GV, but it may also refer to visit-to-visit GV over months to years (Table 1). While glucose values measured by self-monitoring of blood glucose (SMBG) or continuous glucose monitoring (CGM) are usually used to assess intra-day or day-to-day GV, fasting plasma glucose (FPG) and HbA1c are also used to assess visit-to-visit GV. Furthermore, although GV usually refers to overall glycemic variation, including hyper- and hypoglycemia, GV is often also used to refer to postprandial glycemic excursion, especially in patients with T2DM. Thus, data interpretation should take into account these different definitions and concepts of GV.
There are also various indices of GV, as summarized in Table 1. Standard deviation (SD) is most widely used to assess GV. Since SD is also positively associated with the mean value, the coefficient of variation (CV; SD divided by mean × 100 (%)) is also used to assess GV. Mean amplitude of glycemic excursions (MAGE) [10] and M value [11] are also frequently used. MAGE reflects relatively large glycemic excursions, but the measurement of MAGE may be subjective in terms of the selection of large glycemic excursions. M value reflects glycemic variation from a baseline "ideal" value, usually set at 90 to 120 mg/dL; thus, M value is also affected by the mean value. Continuous overlapping net glycemic action (CONGA) provides a precise measurement of within-day GV but requires CGM data for its calculation [12]. Mean of Daily Differences (MODD) [13] is used as an index of day-to-day GV. Although many other indices to assess GV have also been developed and each index reflects different aspects of GV, these indices of GV are largely correlated with each other, and SD appears to remain the gold-standard index of GV [14,15]. Figure 1. Daily glucose profile in healthy subjects assessed by continuous glucose monitoring. The central line is the mean, and the two outer lines represent the 5th and 95th percentiles (P5 and P95, respectively). Arrows indicate the times of the three meals during a day. Reproduced with permission from the American Diabetes Association [9]. Serum 1,5-anhydroglucitol (1,5-AG) is a marker of postprandial hyperglycemia. 1,5-AG is excreted into the urine but is normally fully reabsorbed by the renal tubules. When the plasma glucose level rises (generally >180 mg/dL, the average renal threshold for glucose) and glucosuria occurs, serum 1,5-AG falls due to competitive inhibition of renal tubular reabsorption by glucose. Thus, a lower serum 1,5-AG level reflects higher glycemic excursion, which usually occurs after meals [16]. However, the serum 1,5-AG level becomes too low to be used as a marker of glycemic excursion when marked hyperglycemia persists (e.g., HbA1c > 8%). Also, since treatment with acarbose interferes with and decreases the serum 1,5-AG level independently of the plasma glucose level, serum 1,5-AG may not accurately reflect glycemic excursion in patients treated with acarbose [17,18].
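As a concrete illustration of the simplest of these indices, the sketch below computes SD, CV, and MODD from two days of 7-point glucose profiles; the glucose values are made-up numbers for demonstration only, and MAGE and CONGA are omitted because they require excursion selection or continuous CGM traces.

```python
import numpy as np

# Two hypothetical 7-point self-monitored glucose profiles (mg/dL),
# sampled at the same clock times on consecutive days.
day1 = np.array([95, 140, 110, 180, 130, 100, 90], dtype=float)
day2 = np.array([100, 155, 120, 165, 125, 105, 95], dtype=float)

sd = day1.std(ddof=1)           # standard deviation (SD), the usual GV index
cv = 100.0 * sd / day1.mean()   # coefficient of variation: SD / mean x 100 (%)

# MODD (Mean Of Daily Differences): mean absolute difference between glucose
# values measured at the same time points on two consecutive days.
modd = np.mean(np.abs(day2 - day1))

print(f"SD   = {sd:.1f} mg/dL")
print(f"CV   = {cv:.1f} %")
print(f"MODD = {modd:.1f} mg/dL")
```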
Glycated albumin (GA) is another marker of glycemic control. GA reflects the average plasma glucose level over the past one to two weeks, while HbA1c reflects the average plasma glucose level over the past one to two months [19]. Since albumin is more readily glycated than hemoglobin, GA also reflects glycemic excursions and postprandial hyperglycemia more sensitively than HbA1c [20]. We have reported that the ratio of GA to HbA1c was significantly correlated with the postprandial plasma glucose level but not with the fasting plasma glucose level, indicating that the GA to HbA1c ratio reflects postprandial glucose excursion independently of the fasting glucose level (Figure 2) [21]. Figure 2. GA was more strongly correlated with PPG and ΔPG (i.e., PPG − FPG) than was HbA1c; there was a significant positive correlation between PPG and ΔPG and the GA/HbA1c ratio, but no correlation between FPG and the GA/HbA1c ratio. Reproduced with permission from the Japan Diabetes Society [21].
Glycemic Variability and Vascular Events
In DCCT, it was reported that the incidence of diabetic complications was lower in subjects with intensive therapy using basal-bolus therapy or continuous subcutaneous insulin infusion (CSII) compared with those with conventional therapy, even with the same HbA1c levels [22]. This finding raised the hypothesis that a reduction in GV by intensive therapy might contribute to reduce the incidence of diabetic complications. However, the DCCT investigators recently reanalyzed the data and denied their previous conclusion [23]. Subsequent analyses of the DCCT data also found that there was no association between GV and micro-and macrovascular complications [24][25][26]. In DCCT, GV was assessed as SD and MAGE based on 7-point SMBG throughout the day.
On the other hand, epidemiological studies have consistently shown a significant association between postprandial hyperglycemia, but not fasting hyperglycemia, and incidence of cardiovascular events and all-cause mortality in the general population [27][28][29]. In these studies, postprandial hyperglycemia was assessed as the 120 min post-load plasma glucose level during 75 g oral glucose tolerance test (OGTT); however, it was also reported that lower 1,5-AG levels were associated with the development of CVD in a general population [30]. Esposito et al. reported that the postprandial incremental glucose peak that occurs mostly within 1 h after a meal was related to carotid intima-media thickness (IMT) in patients with T2DM [31]. In the San Luigi Gonzaga Diabetes Study, a 14-year follow-up of 505 patients with T2DM, HbA1c and 2 h postprandial blood glucose level after lunch were significantly associated with CVD events and mortality [32].
Recently, using CGM, Torimoto et al. reported that the reactive hyperemia index (RHI), an index of vascular endothelial function, was significantly correlated with SD and MAGE in patients with T2DM [33]. Notably, RHI was also correlated with the percentage of time spent in hypoglycemia. The same group also reported a significant correlation between RHI and 1,5-AG [34]. Thus, these observational studies indicate an association between GV or postprandial glycemic excursion and the development of atherosclerosis in patients with T2DM. On the other hand, in the A1C-Derived Average Glucose (ADAG) study, Borg et al. reported that GV assessed by either SMBG or CGM was not significantly associated with known CVD risk factors in either T1DM or T2DM patients, while mean glucose values and HbA1c were [35]. In another study using CGM, Sartore et al. reported a significant association between SD or CONGA-2 and the presence of retinopathy in patients with diabetes, although the correlation between GV and retinopathy was not significant in multivariate analysis [36].
In addition to daily GV, recent studies suggest that GV assessed over a longer period (i.e., months to years) is also associated with vascular complications. A number of studies have reported that variability of fasting plasma glucose and HbA1c is associated with a higher risk of developing retinopathy and nephropathy in patients with type 1 and type 2 diabetes. In one such study, visit-to-visit GV was assessed as the SD of five measurements of FPG or HbA1c during the first 3-24 months of the study. As a result, visit-to-visit GV of FPG and HbA1c was significantly associated with micro- and macrovascular events and mortality: highest vs. lowest tenth hazard ratio (95% confidence interval (CI)) 1.64 (1.05-2.55) for vascular events and 3.31 (1.57-6.98) for mortality. These results suggest that GV over a longer period is an important risk factor for vascular events and mortality, the impact of which may be equal to or even greater than that of intraday GV.
Preclinical Studies
Reactive oxygen species (ROS) and oxidative stress are increased under hyperglycemia. It is hypothesized that oxidative stress and diabetic complications are linked through several pathways: (1) the polyol pathway; (2) the hexosamine pathway; (3) protein kinase C activation; and (4) the formation of advanced glycation end-products (AGEs) [44]. Studies have suggested that intermittent hyperglycemia, rather than chronic hyperglycemia, exacerbates the production of ROS.
In in vitro studies, it has been reported that intermittent high glucose levels (5 and 20 mmol/L every 24 h) stimulated ROS overproduction, which led to increased cellular apoptosis in human umbilical vein endothelial cells compared with that in a stable high glucose environment (20 mmol/L) [45][46][47].
In vivo, Horvath et al. examined the effect of "glycemic swings" on oxidative stress and endothelial function in streptozotocin-induced diabetic rats [48]. They treated the diabetic rats with either intermediate-insulin (insulin glargine) once daily to achieve steady normalization of blood glucose levels or long-acting insulin (ultralente insulin) once every other day to induce "glycemic swings" for 14 days. They found that diabetic rats with "glycemic swings" showed higher nitrotyrosine levels and endothelial dysfunction compared with rats with steady normalization of blood glucose. Endothelial dysfunction in rats with "glycemic swings" was even more pronounced than that in untreated diabetic rats. On the other hand, Di Flaviani et al. reported that there was a significant association between CONGA-2 and urinary 8-iso-PGF2α excretion in 26 patients with T2DM treated with diet and/or metformin [53]. In this study, a significant association between CONGA-2 and left ventricular mass index (LVMI) was also reported. In addition to the differences in patients' characteristics and medications, different methods of oxidative stress measurement (e.g., ELISA vs. tandem mass spectrometry for 8-iso-PGF2α measurement) may have caused these conflicting results among studies [54].
Dyslipidemia
In addition to hyperglycemia-induced oxidative stress, postprandial dyslipidemia, mainly triglyceridemia, may also contribute to vascular damage and CVD [55].
Glycated Albumin
GA has not only been shown to be a marker of GV, but GA itself has also been postulated to promote atherosclerosis [56]. Since glycation of albumin impairs the antioxidant activities of albumin, GA may contribute to increased oxidative stress in patients with diabetes [57]. GA or GA to HbA1c ratio, but not HbA1c, was associated with carotid IMT or plaque, and severity of coronary atherosclerosis [58][59][60][61].
Recently, the predictive value of GA for vascular complications has been evaluated in two large prospective cohorts of DCCT and the Atherosclerosis Risk in Communities (ARIC) study [62,63]. As a result, GA has been shown to have similar predictive value to HbA1c for the development of microvascular complications in patients with T1DM and T2DM, and a combination of both improved the predictive value; however, the association between GA and the development of CVD was not confirmed in DCCT.
Hypoglycemia
Hypoglycemia is another possible link between GV and poorer CVD outcomes. It has been reported that greater GV predicted more frequent hypoglycemic events, including severe hypoglycemia requiring assistance from a third person, in patients with T1DM [64][65][66]. In the ACCORD study, all-cause mortality was higher in the intensive glucose control group compared with the conventional treatment group [5]. The incidence of all and severe hypoglycemia was significantly higher in the intensive glucose control group, although the investigators concluded that hypoglycemia was not a cause of increased death in the intensive therapy group [67].
Nonetheless, severe hypoglycemia has been shown to predict all-cause mortality in patients with T2DM [67][68][69]. Although it has not been fully elucidated whether hypoglycemia is causative of death or a coincidental marker of illness, hypoglycemia could promote the onset of CVD through induction of (1) inflammation; (2) blood coagulation abnormalities; (3) the sympathoadrenal response; and (4) endothelial dysfunction [70]. Recent studies suggest that hypoglycemia is associated with impaired cardiovascular autonomic function and an increased risk of arrhythmia, especially nocturnal arrhythmia, which may lead to sudden death, the so-called "dead-in-bed" syndrome [71][72][73][74][75].
Ceriello et al. recently reported the effect of hyperglycemia after hypoglycemia under a glucose clamp in healthy subjects and subjects with T1DM. Hypoglycemia induced for 2 h increased oxidative stress and worsened endothelial function, which was further worsened by hyperglycemia after hypoglycemia [76].
Effects of Treatment of Glycemic Variability on Oxidative Stress and Cardiovascular Outcomes
As described above, in vitro and in vivo animal studies and observational and experimental human studies indicate that oxidative stress is a plausible link between GV and CVD. However, the results of intervention studies are more conflicting.
The Hyperglycemia and its Effect after Acute myocardial infarction on cardiovascular outcomes in patients with Type 2 Diabetes mellitus (HEART2D) trial is to date the only study to directly compare the effects of postprandial vs. fasting glycemic control on CVD outcome [77]. A total of 1115 patients with T2DM who had had an acute myocardial infarction within the preceding 21 days were assigned to either a prandial strategy group or a basal strategy group. The patients in the prandial strategy group were treated with three pre-meal injections of insulin lispro, a rapid-acting insulin analog, with a target 2 h postprandial blood glucose <7.5 mmol/L (135 mg/dL), and those in the basal strategy group were treated with neutral protamine Hagedorn (NPH) insulin twice daily or insulin glargine, a long-acting insulin analog, once daily with a target fasting/pre-meal blood glucose <6.7 mmol/L (121 mg/dL). The trial was stopped because of lack of efficacy, with a mean follow-up period of 963 days. During the study, HbA1c was similarly reduced in both groups, and the prandial strategy group showed a lower daily mean postprandial blood glucose compared with the basal strategy group. There was no significant difference in the incidence of a first CVD event between the groups (hazard ratio 0.98, 95% CI 0.8-1.21). The incidences of all and of severe hypoglycemia were similar between the groups. However, post-hoc subgroup analysis showed that in subjects aged >65.7 years (n = 399), fewer CVD events were observed in the prandial strategy group compared with the basal strategy group (hazard ratio 0.69, 95% CI 0.49-0.96, p = 0.03) [78].
The results of this study were criticized in view of the fact that the difference in postprandial blood glucose level between the groups was less than expected (0.8 mmol/L (14.4 mg/dL) compared with the expected goal of 2.5 mmol/L (45 mg/dL)). Improvement of the management of other coronary risk factors might also have affected the results. Nonetheless, it should be acknowledged that this study was designed to compare two different insulin regimens rather than to clarify the role of GV in CVD outcome.
It has been reported that a similar reduction in oxidative stress was obtained after nine days of treatment with either inhaled mealtime insulin or basal insulin in patients with T2DM [79]. Moreover, Monnier et al. reported that the urinary 8-iso-PGF2α level was decreased by insulin treatment but not by oral antidiabetic drug (OAD) treatment in patients with T2DM [80], suggesting that insulin therapy itself may exert an anti-oxidative effect independently of hyperglycemia or GV; this anti-oxidative effect of insulin might have affected the results of the HEART2D trial.
α-Glucosidase inhibitors (AGI) slow absorption of glucose from the intestine, resulting in suppression of postprandial glucose excursion. In the Study to Prevent Non-Insulin-Dependent Diabetes Mellitus (STOP-NIDDM) study, treatment with acarbose reduced the incidence of T2DM in patients with impaired glucose tolerance (IGT) [81]. In this study, a reduction in CVD in patients treated with acarbose was also reported [82]. A reduction in CVD by treatment with acarbose was also observed in patients with T2DM [83], suggesting the importance of reduction in postprandial excursion to prevent CVD in patients with IGT and T2DM.
Glinides are rapid-acting insulin secretagogues and thereby reduce postprandial glucose excursion, similarly to AGIs. Esposito et al. reported that treatment with repaglinide reduced postprandial hyperglycemia compared with glyburide, a sulfonylurea, and that more patients treated with repaglinide showed regression of carotid IMT than those treated with glyburide [84]. Mita et al. also reported that treatment with nateglinide resulted in regression of carotid IMT compared with untreated patients with T2DM [85].
On the other hand, in the Nateglinide and Valsartan in Impaired Glucose Tolerance Outcomes Research (NAVIGATOR) trial, treatment with nateglinide did not reduce the incidence of either T2DM or CVD [86]. Differences in patients' characteristics and an insufficient dose of nateglinide used in the study might have affected these results. However, the different results between the STOP-NIDDM and NAVIGATOR trials may be associated with the different mechanisms of action of the two drugs. As AGIs slow glucose absorption, postprandial insulin secretion is reduced. On the other hand, glinides suppress postprandial glucose excursion through increasing early phase insulin secretion after meal ingestion. Recently, Sawada et al. have reported a comparison of the effects on oxidative stress and endothelial function between a glinide and an AGI [87]. In this study, a total of 104 patients with T2DM were randomly assigned to treatment with miglitol or nateglinide. After 4 months of treatment, despite similar improvement of HbA1c and 1,5-AG in both groups, a reduction in oxidative stress assessed by diacron reactive oxygen metabolites (d-ROMs) and improvement of percent flow-mediated dilatation (%FMD) were observed only in the patients treated with miglitol, accompanied by improvement of insulin resistance and lipid profile, suggesting that treatment of postprandial glucose excursion without stimulating insulin secretion may be preferable to ameliorate endothelial dysfunction through a reduction in oxidative stress. Whether endogenous and exogenous insulin have different effects on oxidative stress remains unknown.
We have also recently reported the effects of mitiglinide on GV and oxidative stress markers in patients with T2DM [88]. Treatment with mitiglinide for 4 months significantly improved 1,5-AG and daily GV assessed by 7-point SMBG, but there was no change in plasma oxidized low-density lipoprotein (oxLDL), plasma pentosidine, urinary excretion of 8-hydroxydeoxy guanosine (8-OHdG) or 8-iso-PGF2α after treatment. On the other hand, Wang et al. have reported that treatment with nateglinide for 4 weeks significantly improved insulin resistance, oxidative stress (nitric oxide, malondialdehyde and superoxide dismutase) and endothelial dysfunction in patients with newly diagnosed T2DM [89].
Dipeptidyl peptidase 4 (DPP-4) inhibitors increase insulin secretion and suppress glucagon secretion in a glucose-dependent manner through enhancement of the effect of endogenous incretin, thereby improving postprandial glycemic excursion without increasing the risk of hypoglycemia. Thus, treatment with DPP-4 inhibitors is expected to improve GV [90,91]. Rizzo et al. compared the effect on GV and oxidative stress between two different DPP-4 inhibitors, sitagliptin and vildagliptin, in patients with T2DM [92]. After 12 weeks of treatment, HbA1c was similarly improved in both groups; however, MAGE was significantly lower in patients treated with vildagliptin compared with those treated with sitagliptin. Oxidative stress assessed by nitrotyrosine and inflammatory markers (interleukin (IL)-6 and IL-18) was significantly lower in the vildagliptin group, and there was a significant correlation between nitrotyrosine and change in MAGE but not HbA1c. The same researchers also reported that weight loss after bariatric surgery resulted in a reduction in MAGE and plasma nitrotyrosine levels accompanied by increased glucagon-like peptide 1 (GLP-1) level in patients with T2DM, although no change in MAGE and plasma nitrotyrosine levels was observed after weight loss by dieting [93].
These favorable changes in GV without an excess risk of hypoglycemia by treatment with DPP-4 inhibitors are expected to result in improvement of CVD outcome. However, recently two randomized controlled trials failed to show a beneficial effect of DPP-4 inhibitors on CVD outcome [94,95].
GLP-1 receptor agonists (GLP-1RA) also lead to a supraphysiological activation of GLP-1 receptor and improve GV. Ceriello et al. reported that GLP-1 infusion improved oxidative stress assessed by plasma nitrotyrosine and 8-iso-PGF2α and endothelial dysfunction induced by hyper-and hypoglycemia in patients with T1DM [96]. This effect was further augmented by concomitant vitamin C infusion [97]. They also reported additive beneficial effects of a combination of GLP-1 and insulin on hyperglycemia-induced oxidative stress and endothelial dysfunction in patients with T2DM [98]. However, this effect of GLP-1 was independent of GV since the same plasma glucose level was maintained during the study. Kelly et al. reported that there was no significant change in oxidative stress (oxLDL) and endothelial function assessed by RHI between baseline and after 3 months of treatment with either exenatide or metformin in patients with prediabetes [99].
Thus, to date, the causative role of GV in oxidative stress and CVD in patients with diabetes remains controversial. Finally, it has been reported that antioxidant vitamin supplementation improved neither CVD outcome nor all-cause mortality in the general population or in patients with diabetes [100,101].
Factors Associated with Glycemic Variability
For the management of GV, it is important to clarify factors associated with GV in patients with diabetes. Factors that associate with GV are summarized in Table 2. A defect of beta cell function is a hallmark of both type 1 and type 2 diabetes [102,103]. An inverse relationship between residual C-peptide level and GV has been reported in patients with T1DM [104][105][106]. We and others reported that an inverse relationship between beta cell function and GA to HbA1c ratio was observed in patients with T2DM [21, [107][108][109][110] (Figure 3A), suggesting that less beta cell function is associated with greater postprandial glycemic excursion or GV in patients with T2DM. Interestingly, the relationship between beta cell function assessed by serum C-peptide and GA to HbA1c ratio was comparable between patients with type 1 and type 2 diabetes [21] (Figure 3B), suggesting that the relationship between serum C-peptide and GV is independent of the type of diabetes. Kramer et al. recently reported the effects of intensive insulin therapy on GV and beta cell function in 61 patients with early-stage T2DM [111]. Intensive insulin therapy for 4 weeks reduced GV assessed by 6-point SMBG, and the reduction in GV was significantly associated with improvement of beta cell function.
Age is another factor associated with GV. Munshi et al. reported that the proportion of postprandial hyperglycemia within total hyperglycemia was greater in older (≥65 years) patients with T2DM compared with younger (<65 years) patients [112]. We have also reported that SD and MAGE assessed by CGM were significantly associated with age in patients with T2DM (Figure 4). The progressive decline in beta cell function with duration of diabetes, impairment of multiple organs such as the liver and kidney, reduction in lean mass, and autonomic neuropathy may underlie the relationship between age and GV. In addition, polypharmacy and cognitive impairment/dementia, resulting in poor compliance with medication, may also contribute to greater GV in the elderly. A significant inverse association between MAGE and cognitive function has been reported in older patients with T2DM [113]. Therefore, the increased risk of hypoglycemia in older patients with T2DM [114,115] is at least in part due to greater GV in this population.
Importance of Preventing Hypoglycemia
Although the relationship between GV and progression of atherosclerosis remains to be established, it has been reported that GV is associated with hypoglycemia, especially severe hypoglycemia [64][65][66]. Severe hypoglycemia can not only directly cause death but also predicts a higher risk of mortality [67][68][69]. Hypoglycemia may also increase the risk of CVD or severe arrhythmia, as described above. Thus, clinically, minimizing GV is important not only for prevention of CVD but also for prevention of hypoglycemia in the management of diabetes (Figure 5).
Treatment Strategy to Minimize Glycemic Variability
As described above, beta cell function is one of the major factors associated with GV. Moreover, beta cell function progressively deteriorates during the course of the disease [117][118][119]. Therefore, to appropriately manage GV, it is important to preserve and recover beta cell function.
To date, a reduction in beta cell workload appears the most effective approach to preserve or recover beta cell function [103,118]. Thus, use of biguanides and/or thiazolidinediones is recommended to preserve beta cell function without increasing the risk of hypoglycemia, if it is not possible to achieve the glycemic goal with lifestyle modification (Figure 6).
To specifically reduce postprandial glycemic excursion, AGIs and glinides are recommended. Since sulfonylureas (SUs) do not effectively reduce GV and increase the risk of hypoglycemia [120,121], their use may be somewhat limited, especially in older patients. In this case, glinides are recommended as a substitute for SUs. In addition to general lifestyle modification, selection of foods with a lower glycemic index, increasing dietary fiber and implementing a brisk walk after meals are recommended to reduce postprandial glycemic excursion [122,123].
Figure 6. Proposed concept of treatment strategy for type 2 diabetes (T2DM) in relation to functional beta cell mass. An α-glucosidase inhibitor is partly approved for use in patients with impaired glucose tolerance (IGT) in Japan. Medications not approved or marketed in Japan are not included in the figure. Since currently no single therapy or agent can cure or even manage T2DM, an effective and individualized combination of current medications in addition to lifestyle modification aiming at reduction in beta cell workload is important to preserve or recover beta cell function, which may lead to a reduction in the risk of severe hypoglycemia.
As incretin drugs, i.e., DPP-4 inhibitors and GLP-1RAs, act in a glucose-dependent manner, the use of these drugs also improves GV [121,124]. Short-acting GLP-1RAs seem to more effectively reduce postprandial glycemic excursion compared with long-acting GLP-1RAs, through inhibiting gastric emptying without enhancing insulin secretion [125]. Sodium glucose cotransporter 2 (SGLT2) inhibitors, which have recently been approved in several countries, also reduce GV without enhancing insulin secretion [126,127].
Insulin therapy is eventually needed in most patients with T2DM. Rapid-acting insulin analogs such as insulin aspart, insulin lispro and insulin glulisine are used to control postprandial hyperglycemia, although the use of prandial insulin results in a higher risk of hypoglycemia and weight gain compared with the use of basal insulin [128].
Conclusions
This review summarizes the current knowledge on the associations among GV, oxidative stress and CVD, especially focusing on clinical evidence, and discusses their clinical implications in the management of GV.
In vitro and in vivo animal studies indicate that GV, compared with chronic hyperglycemia, promotes excess oxidative stress and worsens cellular and vascular damage. In experimental settings, several studies have also confirmed these findings in humans.
However, the effect of GV on oxidative stress in clinical settings remains controversial, and to date no solid evidence exists. Few studies have demonstrated the effect of daily GV on CVD outcomes, although visit-to-visit GV, a longer term index, may predict worse CVD outcome in patients with diabetes. In contrast, the association between postprandial hyperglycemia and CVD has been consistently reported in the general population. Different definitions or concepts (e.g., glycemic variability vs. postprandial glucose excursions) of GV further complicate the interpretation of the data, although they are correlated with each other and not able to be completely separated. In addition, the conflicting findings among the numerous studies may be derived from differences in study design, intervention period, baseline patient characteristics, use of different medications and use of different oxidative stress markers. Although plasma or urinary 8-iso-PGF2α is widely used for assessment of oxidative stress, the two different methods of measurement (ELISA and tandem mass spectrometry) may affect the findings.
These conflicting results may also suggest that the effect of daily GV on oxidative stress or CVD is, if anything, minimal to modest in patients with diabetes. It has been shown that multifactorial intervention for coronary risk factors effectively reduces the incidence of CVD and all-cause mortality [129]. Antihypertensives and statins have also been reported to reduce oxidative stress [130][131][132][133]. Thus, to improve CVD outcomes and mortality in patients with diabetes, a multifactorial approach appropriately managing each coronary risk factor is necessary.
Nonetheless, the association between GV and hypoglycemia is clinically important and thus GV should be minimized to prevent hypoglycemia in clinical settings. Since GV becomes greater when beta cell function decreases, treatment of patients with diabetes should focus on preservation or recovery of beta cell function. Also, since GV becomes greater with age, older patients need to be treated with special caution in regard to hypoglycemia.
In conclusion, further research including prospective trials to explore the mechanisms linking GV and CVD is warranted. Although the relationship among GV, oxidative stress and CVD has not been established, continuous efforts to minimize GV will prevent hypoglycemia and improve QOL in patients with diabetes, and hopefully lead to the improvement of CVD outcome in patients with and without diabetes.
"Medicine",
"Biology"
] |
The Possible Role of the Novel Cytokines IL-35 and IL-37 in Inflammatory Bowel Disease
Interleukin- (IL-) 35 and IL-37 are newly discovered immune-suppressing cytokines. They have been described in inflammatory diseases such as collagen-induced arthritis and asthma. However, their expression in inflammatory bowel disease (IBD) patients has not yet been explored. Our aim was to evaluate their serum and inflamed mucosal levels in IBD patients. In 20 ulcerative colitis (UC) patients, 7 Crohn's disease (CD) patients, and 15 healthy subjects, cytokine levels in serum were determined using ELISA, and mucosal expression studies were performed by immunohistochemistry, quantitative real-time PCR, and Western blot. The results showed that serum IL-35 and IL-37 levels were significantly decreased in UC and CD patients compared with healthy subjects. The cytokine levels correlated inversely with UC activity. IL-35 was expressed in infiltrating immune cells, while IL-37 was expressed in intestinal epithelial cells as well as inflammatory cells. IBD patients had significantly higher Ebi3, p35 (two subunits of IL-35), and IL-37b gene expression; IL-35 and IL-37 protein expression was also higher in IBD patients compared with controls. The study showed that serum IL-35 and IL-37 might be potential novel biomarkers for IBD. Intestinal IL-35 and IL-37 proteins are upregulated, suggesting that regulating the expression of these two cytokines may provide a new possible target for the treatment of IBD.
Introduction
Inflammatory bowel disease (IBD), including Crohn's disease (CD) and ulcerative colitis (UC), is a chronic inflammatory disorder of the gastrointestinal tract. Although the etiology is not completely understood, initiation and exacerbation of the inflammatory process seem to be due to a massive local mucosal immune response [1]. Analysis of immunoinflammatory pathways in the gut of patients with UC or CD has shown that tissue damage is driven by a complex and dynamic crosstalk between immune and nonimmune cells and that cytokines are key mediators of this interplay [2,3].
Interleukin- (IL-) 35 and IL-37 are newly discovered immune-suppressing cytokines. IL-35 belongs to the IL-12 family, which contains IL-12, IL-23, and IL-27. It is composed of two subunits, Epstein-Barr virus-induced gene 3 (Ebi3) and p35 (IL-12a) [4]. IL-35 was shown to be secreted by Foxp3+CD4+CD25+ regulatory T cells (Tregs) in mice or a regulatory T cell population induced by IL-35 [5] and by CD138+ plasma cells in experimental autoimmune encephalomyelitis (EAE) [6]. Using experimental database mining and statistical analysis methods, Li et al. reported that IL-35 is not constitutively expressed in human tissues but is inducible in response to inflammatory stimuli [7]. IL-37, also known as IL-1F7, is a new member of the IL-1 family, whose members share a common characteristic β-barrel structure. IL-37b is the largest isoform of the five variants and is expressed in a variety of normal tissues and tumors in humans [8]. It was first found in bone marrow, and neutrophils were identified as its main site of synthesis. It is mainly expressed in blood cells, the respiratory tract, the gastrointestinal tract, and skin keratinocytes [9].
To investigate the possible role of IL-35 and IL-37 in the inflammatory process of IBD, we aimed to evaluate their serum and mucosal levels in IBD patients. To the best of our knowledge, this is the first study to explore the expression of IL-35 and IL-37 in the inflamed colonic mucosa of IBD patients by quantitative real-time polymerase chain reaction (qRT-PCR), immunohistochemistry, and Western blot.
Subjects.
A total of 27 patients with a definitive diagnosis of IBD were recruited at Shandong University Qilu Hospital; 20 UC and 7 CD patients were included during the period from September 2013 to April 2014. Diagnosis was based on a history of abdominal pain, diarrhea, or blood in stool, together with a macroscopic appearance on colonoscopy or double-balloon endoscopy and a biopsy compatible with IBD. The following relevant medical records were collected: gender, age at diagnosis, disease evolution, extension, extraintestinal manifestations, medical treatment, and clinical course of disease. UC activity was assessed by the Mayo score activity index [10] and CD activity was assessed by the Crohn's disease activity index (CDAI) [11]. Blood was drawn for the measurement of hemoglobin, hematocrit, and erythrocyte sedimentation rate (ESR). Additionally, 15 noninflamed controls (median age, 48 yr; 9 males/6 females) were recruited among healthy subjects undergoing colonoscopy for colorectal cancer screening or polyp surveillance. These subjects were free from gastrointestinal symptoms and other inflammatory diseases. Only subjects with both macroscopically and microscopically normal colonoscopy findings were included. None of the healthy subjects in the study was taking any medications known to affect the gastrointestinal tract or the immune system.
Samples Collection.
A fasting blood sample was taken from all patients and healthy subjects. It was centrifuged at 1500 ×g for 20 min at room temperature, and the serum was collected and stored at −80 °C until analysis. During endoscopy, biopsies from the colon, ileocecum, or small intestine were obtained from patients and healthy subjects. Two biopsies to be used for RNA and protein assessment were snap-frozen in liquid nitrogen and then transferred to −80 °C for storage until processing. One biopsy was placed in formalin for pathology and immunohistochemical staining. The study was performed after receiving written informed consent from all study subjects, and the protocol was approved by the Regional Ethical Review Board at Qilu Hospital, Shandong University.
Enzyme-Linked Immunosorbent Assay.
Serum IL-35 and IL-37 were measured using commercially available enzyme-linked immunosorbent assay (ELISA) kits (Bio-Swamp). All cytokine assays were performed in duplicate and in accordance with the manufacturers' protocols.
Immunohistochemistry.
Formalin-fixed and paraffin-embedded 4 μm thick tissue slices were dewaxed and rehydrated before antigen retrieval. The microwave antigen retrieval method was then performed with the slides immersed in EDTA antigen retrieval solution (pH 9.0) for 15 minutes. After that, 3% hydrogen peroxide (H2O2) was added to the slides to inhibit endogenous peroxidase activity. Nonspecific binding was blocked by incubation with 10% normal goat serum at 37 °C, pH 7.5, for 30 min. Subsequently, mouse anti-human IL-35 monoclonal antibody (Imgenex, USA) at a 1 : 200 dilution and mouse anti-human IL-37 monoclonal antibody (Abcam, USA) at a 1 : 250 dilution were applied, respectively, to the sections, which were later incubated at 4 °C overnight. On the second day, biotinylated antibody and streptavidin-peroxidase reagent (Zhongshan Biotech, China) were successively applied for 30 min each at 37 °C. Finally, 3,3′-diaminobenzidine tetrahydrochloride (DAB) was used for visualization and hematoxylin was added to counterstain. Samples were viewed with an Olympus IX81 microscope and images were produced using DP Controller 1.2.1.108. All of the slides were independently analyzed by two pathologists.
Quantitative Real-Time PCR.
IL-37: forward 5′-GCT CAG GTG GGC TCC TGG AA-3′, reverse 5′-GCT GAC CTC ACT GGG GCT CA-3′. Human glyceraldehyde-3-phosphate dehydrogenase (GAPDH) was used as the internal control from parallel samples because the reference gene was stably expressed; its primers were forward: 5′-GGT GGT CTC CTC TGA CTT CAA CAG-3′ and reverse: 5′-GTT GTT GTA GCC AAA TTC GTT GT-3′. Melting curve analysis was used to confirm amplification specificity. The quantification data were analyzed with LightCycler analysis software version 4.0 (Roche Applied Science, Germany), and the relative target gene expression was normalized on the basis of GAPDH. Results were expressed as an x-fold difference relative to the calibrator.
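The fold-change arithmetic is not spelled out above, but normalization to GAPDH with results expressed relative to a calibrator is commonly implemented as the 2^-ΔΔCt method; the sketch below illustrates that calculation under this assumption, using made-up Ct values.

```python
def fold_change(ct_target_sample, ct_gapdh_sample,
                ct_target_calibrator, ct_gapdh_calibrator):
    """Relative expression by the 2^-ΔΔCt method: normalize the target Ct to
    GAPDH in each sample, then express the result as a fold difference
    relative to the calibrator sample."""
    d_ct_sample = ct_target_sample - ct_gapdh_sample
    d_ct_calibrator = ct_target_calibrator - ct_gapdh_calibrator
    dd_ct = d_ct_sample - d_ct_calibrator
    return 2 ** (-dd_ct)

# Hypothetical Ct values: IL-37 in an IBD biopsy vs. a healthy-control calibrator
print(fold_change(ct_target_sample=24.1, ct_gapdh_sample=18.0,
                  ct_target_calibrator=27.3, ct_gapdh_calibrator=18.2))
# -> 8.0-fold higher IL-37 mRNA than the calibrator
```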
The Mann-Whitney test was used to evaluate differences in cytokine expression in the serum. One-way analysis of variance, followed by post hoc tests with the Newman-Keuls test for multiple comparisons, was used to compare the 3 groups. Pearson correlation was used to calculate correlations between serum cytokine levels and Mayo score. One-way analysis of variance, followed by post hoc tests with Tukey correction for multiple comparisons, was used to compare the 3 groups in terms of mucosal expression. In all tests, a P value < 0.05 was considered significant.
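For illustration, the comparisons described above map onto standard SciPy calls as sketched below; the group values are placeholders rather than data from this study, and the post hoc comparisons are not shown.

```python
import numpy as np
from scipy import stats

uc = np.array([12.4, 15.1, 10.8, 14.0, 11.6])   # e.g., serum cytokine level, UC group
cd = np.array([13.2, 12.0, 14.5, 11.1])          # CD group
hc = np.array([21.7, 19.9, 23.4, 20.5, 22.1])    # healthy controls

# Two-group comparison (nonparametric)
u_stat, p_mw = stats.mannwhitneyu(uc, hc, alternative="two-sided")

# Three-group comparison (one-way ANOVA); post hoc tests would follow if p < 0.05
f_stat, p_anova = stats.f_oneway(uc, cd, hc)

# Correlation between serum cytokine level and Mayo score (placeholder scores)
mayo = np.array([9, 6, 11, 8, 10])
r, p_corr = stats.pearsonr(uc, mayo)

print(p_mw, p_anova, r, p_corr)
```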
Serum IL-35 and IL-37 Levels Are Decreased in IBD Patients.
Serum IL-35 and IL-37 concentrations were significantly reduced in active UC patients and active CD patients compared with healthy controls (HC) (Table 2(a)). There were also significant differences between mild UC and moderate UC and between mild UC and severe UC (P < 0.05) (Table 2(b)). In contrast, the differences between the UC and CD groups and between the moderate and severe UC groups did not reach statistical significance.
IL-35 Expression in Colonic Mucosa from IBD Patients.
In order to determine gene and protein expression in UC and CD patients, the relative mRNA expression of Ebi3 and p35 and the IL-35 protein level were quantified by qRT-PCR and Western blot analysis, and IL-35-producing cells were identified by immunohistochemistry. As shown in the representative images of this analysis, IL-35 staining in UC patients' tissue (Figures 2(a) and 2(b)) was higher than that in CD patients' tissue (Figures 2(c) and 2(d)). IBD patients had significantly higher Ebi3 and p35 gene expression compared with the healthy control group (P < 0.0001) (Figure 3), and the difference between UC and CD biopsies in Ebi3 and p35 mRNA expression did not reach statistical significance (P = 0.116; P = 0.779). Western blot analysis showed a significant upregulation of IL-35 (P < 0.05) in the inflamed mucosa of patients as compared to controls (Figure 4). IL-35 protein expression in UC biopsies was higher than that in CD biopsies. No relationship between IL-35 mucosal expression and disease activity was observed in UC or CD patients. Compared with the healthy control group, the IBD group also had higher IL-37 mRNA expression (P < 0.001), and the difference between UC and CD was statistically significant (P < 0.001) (Figure 6). For IL-37, the protein expression trend was similar to that of IL-35: Western blot showed that IL-37 protein expression was higher in UC patients and CD patients compared to healthy subjects, and the expression level of IL-37 protein was higher in samples from UC patients than in CD samples (P < 0.001) (Figure 7). The mean IL-37 expression tended to be higher in severe UC samples than in mild UC samples, but the difference was not statistically significant.
Discussion
In this study, we first demonstrated that serum IL-35 and IL-37 levels were significantly lower in active IBD patients than in healthy controls and were moderately negatively correlated with the Mayo score in UC patients. Takahashi et al. have also examined IL-35 in peripheral blood. By immunocytochemical staining, IL-37 protein is present mainly in the cytoplasm of peripheral blood mononuclear cells (PBMC), constitutively at low levels in normal people, and can be upregulated by inflammatory stimuli and cytokines [13]. IL-37 is expected to have the ability to translocate into the nucleus [14] and can be redistributed between intracellular and extracellular compartments. Perhaps the transfer of IL-37 into cells is the reason why its serum content is decreased. The study shows that decreased serum IL-35 and IL-37 levels may represent insufficient anti-inflammatory activity in vivo and hold promise as novel biomarkers for monitoring disease activity in UC.
We have characterized the mucosal expression of IL-35 and IL-37 in patients with IBD. The overexpression and enhanced production of IL-35 and IL-37 in colonic mucosa may play a role in the inflammatory process of IBD. Treg cells heavily infiltrate the lamina propria (LP) of inflamed areas of the UC colon compared to normal colon [15]. Treg cells induced the generation of induced regulatory T 35 cells (iTr35 cells) in an IL-35- and IL-10-dependent manner in vitro and induced their generation in vivo under inflammatory conditions in intestines infected with Trichuris muris. We therefore think that iTr35 cells are increased to produce more IL-35, which inhibits effector T cell (Teff cell) proliferation and suppresses Th17 development. IL-35-producing B cells may also participate in the production of IL-35, as Shen et al. suggested that, during experimental autoimmune encephalomyelitis (EAE), CD138(+) plasma cells were also the main source of B-cell-derived IL-35 and IL-10 [6]. In the gut, constitutive epithelial expression of anti-inflammatory immune mediators like IL-37 might be mandatory to maintain the homeostasis of the local immune response against commensal bacteria. The protein is induced by toll-like receptor (TLR) agonists in monocytes and is expressed in tissues from patients with inflammatory diseases [16,17]. Moreover, the production of IL-37 by epithelial cells, neutrophils, and monocytes can form a positive feedback loop that promotes further production. We speculate that the increased IL-35 and IL-37, which are delivered by infiltrating immune cells, counteract mucosal inflammation in IBD. The UC group possessed the highest expression of IL-35 and IL-37 in colonic tissue, followed by the CD group, and the HC group expressed the lowest. The lower levels of anti-inflammatory cytokines in CD may explain why CD is a more chronic and continuous disease. However, there was no relationship between IL-35 and IL-37 mucosal expression and disease activity in UC or CD patients. It is possible that, at the beginning of inflammatory disease, large amounts of these immune-suppressing cytokines are produced to limit inflammation in the affected colon; despite their increased frequency and potent suppressor activity in vitro, they fail to reverse the disease process. Unlike IL-37, the mRNA levels of Ebi3 and p35 did not show differences between UC and CD. We should consider the fact that IL-27 and IL-35 share the β-chain Ebi3, whereas IL-12 and IL-35 share the α-chain p35.
The mechanisms of action of IL-35 and IL-37 are not yet clear. IL-35 is involved in inflammatory diseases of the nervous system, alimentary system, bone and joint system, and respiratory system. Zandian et al. demonstrated that IL-35 had an inhibitory effect against demyelination by preventing the development of autoaggressive T cells [18]. Kochetkova et al. suggested that exogenous IL-35 could suppress the activity of CD4+ T cells and of Th1 and Th17 cells and inhibit the inflammation of collagen-induced arthritis [19]. Meanwhile, it was indicated that IL-35 could help the respiratory system recover from inflammation [20]. Wirtz et al. likewise reported immunoregulatory effects of IL-35 on inflammatory responses in mice [21]. One subunit of IL-35, Ebi3, is widely expressed in EBV-transformed B-lymphocytes and in tissues such as tonsil and spleen [22]. Ebi3 can negatively regulate IL-17, IL-22, and the Th17 transcription factor RORγt and exert protective immunity against inflammation [23]. The p35 subunit of IL-12 can lead to the progression of herpes stromal keratitis (HSK) in mice, which is IL-12p40 independent [24]. The two subunits of IL-35 thus each have their own ability to regulate immunity and the process of inflammation. When they combine to form the heterodimer, the p35 subunit may act as a ligand, and the other subunit, Ebi3, may mainly exert the immunological function [25]. So far, the signaling pathway of IL-35 is not fully defined; however, research has confirmed that IL-35 signals through a unique heterodimer of the receptor chains IL-12Rβ2 and gp130 or through homodimers of each chain [26]. Signaling through the IL-35 receptor requires the transcription factors STAT1 and STAT4, which form a unique heterodimer that binds to distinct sites in the promoters of the genes encoding the IL-12 subunits p35 and Ebi3. IL-35 can directly suppress Teff cell proliferation, convert naive T cells into IL-35-producing iTr35 cells, suppress Th17 development, and mediate IL-10 generation. Similarly, IL-37 is a cytokine implicated in inflammation, autoimmunity, and other immunological disorders. The IL-37 protein is highly expressed in synovial cells of patients with rheumatoid arthritis but expressed at low levels in healthy human synovial cells [5,27]. IL-37 expression was also significantly increased in the skin lesions of patients with psoriasis and in macrophages of Crohn's disease lesions [28]. IL-37 is synthesized as a proprotein which, after stimulation, is processed to its mature form [28]. Lipopolysaccharide (LPS), together with other inflammatory stimuli and cytokines, activates caspase-1, which is considered to be the major cleaving enzyme responsible for maturation of IL-1 family precursors [16]. With broad-spectrum functions in antibacterial, antiviral, endotoxin-neutralizing, antitumor, and immune-regulatory activity, IL-37 can kill microorganisms in general, mainly by changing the permeability of bacterial cells. It also has the ability to raise the production of several cytokines, such as IL-8, to expand acquired immune function [29]. Studies in mouse models have shown that IL-37 downregulates inflammation [3]. TLR agonists, tumour necrosis factor (TNF), and other cytokines can induce the production of inflammatory cytokines, and Nold et al. reported that IL-37 attenuated this process, thus exerting anti-inflammatory effects [27]. In addition, Liu et al. showed that IL-37 exerted significant inhibition of TNF-α-induced IP-10 expression [30].
In the inflamed mucosa of IBD patients, T cell activation, as well as dendritic cell (DC) activation, can be inhibited by epithelial cell-derived IL-37. A possible mechanism is that IL-37 reduces the surface expression of the costimulatory molecule CD86 (B7-2) and major histocompatibility complex (MHC) II on DCs. IL-37b mRNA expression induced by TNF-α is mediated by the activation of MAPK and PI3K and the transcription factors NF-kB and AP-1. Conversely, these signalling molecules are major mediators of the induction of proinflammatory responses in the inflamed mucosa. Thus, it appears that in the inflamed mucosa of IBD patients a negative feedback inhibitor of inflammatory responses (the induction of IL-37b) and the proinflammatory responses themselves are induced via coupled signalling pathways [31].
Conclusion
IL-35 and IL-37 are newly identified cytokines with potential therapeutic relevance for IBD. Our study group (the UC and CD groups combined) included only 27 cases, so larger-scale studies are needed. Further mechanistic studies on the roles of IL-35 and IL-37 should also be performed to make these findings clinically useful in the future.
"Biology",
"Medicine"
] |
Mitochondrial Respiratory Function Induces Endogenous Hypoxia
Hypoxia influences many key biological functions. In cancer, it is generally believed that a hypoxic condition is generated deep inside the tumor because of the lack of oxygen supply. However, consumption of oxygen by cancer cells should be one of the key means of regulating oxygen concentration to induce hypoxia, but this has not been well studied. Here, we provide direct evidence of the mitochondrial role in the induction of intracellular hypoxia. We used Acetylacetonatobis [2-(2′-benzothienyl) pyridinato-kN, kC3'] iridium (III) (BTP), a novel oxygen sensor, to detect intracellular hypoxia in living cells via microscopy. The well-differentiated cancer cell lines, LNCaP and MCF-7, showed intracellular hypoxia without exogenous hypoxia in an open environment. This may be caused by high oxygen consumption, low oxygen diffusion in water, and low oxygen incorporation into the cells. In contrast, the poorly differentiated cancer cell lines PC-3 and MDAMB231 exhibited intracellular normoxia owing to low oxygen consumption. The specific complex I inhibitor, rotenone, and the reduction of mitochondrial DNA (mtDNA) content reduced intracellular hypoxia, indicating that intracellular oxygen concentration is regulated by the consumption of oxygen by mitochondria. HIF-1α was activated in endogenously hypoxic LNCaP and the activation was dependent on mitochondrial respiratory function. Intracellular hypoxic status was regulated by glucose in a parabolic dose-response manner; a low concentration of glucose (0.045 mg/ml) induced the strongest intracellular hypoxia, possibly because of the Crabtree effect. Addition of FCS to the media induced intracellular hypoxia in LNCaP, and this effect was partially mimicked by an androgen analog, R1881, and inhibited by the anti-androgen, flutamide. These results indicate that mitochondrial respiratory function determines intracellular hypoxic status and may regulate oxygen-dependent biological functions.
Introduction
Oxygen concentration within the cell regulates many key biological functions including HIF-1α activation, glucose transport [1], potassium pump activity, intracellular calcium concentration [2], P450 family enzymes [3], and HMGR (3-hydroxy-3-methylglutaryl-CoA reductase) expression [4,5]. In cancer, oxygen concentration deep inside the lesion is low and can elicit a hypoxic response in cells; however, other mechanisms are involved in the regulation of intracellular hypoxic status [6]. Mitochondrial DNA (mtDNA) encodes thirteen proteins that form essential subunits of the mitochondrial respiratory chain complexes along with the subunits encoded by the nuclear DNA. We previously showed that in prostate cancer cells from patients, reduction in mtDNA content is associated with cancer progression as estimated by Gleason grade [4]. A report by Lu et al. showed that reduction of mtDNA content induced the Warburg effect (a reduction in oxidative phosphorylation with an increase in fermentative glycolysis) [7]. We also reported that reduction of mtDNA content induces an anti-apoptotic phenotype [8,9], cancer progression phenotypes [10][11][12], and cancer progression signals such as NF-kB [13], AP-1 [13], ERK [11], JNK [11], and AKT [8]. We have also recently reported on the ability of oxygen to regulate the degradation of HMGR, leading to the activation of Ras in prostate cancer cells [4]. Additionally, Nguyen and colleagues have reported the hypoxia-stimulated degradation of HMGR [5]. These findings led us to hypothesize that the mitochondria play a central role in cancer progression through the regulation of intracellular oxygen concentration. To this end, we employed the Oxoplate system (described in detail in the methods and in Cook et al. [4]) to evaluate the ability of cells to change the oxygen concentration in the media surrounding the cells in an open system. To evaluate the intracellular hypoxic status in living cells via microscopy we utilized BTP, which localizes to the ER (described in detail below and in Zhang et al. [14]). Pimonidazole has been utilized to detect cellular hypoxia, but we elected not to use it in this study since pimonidazole detects protein adducts induced by hypoxia rather than being a direct detector of cellular oxygen [15][16][17]. In our studies we elected to use BTP in order to observe changes in intracellular hypoxia in real time in living cells.
Determination of Oxygen Concentration Surrounding Cells Using Oxoplate
Oxoplate OP96F plates were purchased from PreSens. The Oxoplate system has been described in detail by Cook et al. [4]. Briefly, wells containing 200 µl of air-saturated (K100) and oxygen-free water (K0) served as standards for high- and low-oxygen conditions, respectively. Oxygen-free water was prepared by the addition of sodium sulfite (1%); the well was then overlaid with mineral oil (50 µl) to prevent diffusion of oxygen into the water during the course of the experiment. The remaining wells were open to the environment inside the plate reader so that atmospheric oxygen was able to diffuse into the media of the sample wells. Cells were grown in 10 cm culture dishes under normal culture conditions until the cells had reached approximately 80% confluence and then collected by trypsinization. Cells were added to the Oxoplate at a concentration of 1 × 10⁶ cells/ml in RPMI plus 5% FCS (200 µl final volume). Fluorescence in each well was then measured every 5 minutes for 3 hours at 37 °C in a plate reader (Synergy HT, BIO-TEK) in dual kinetic mode. The oxygen concentration in the wells at each time point was calculated, in micromoles, from the following calibration quantities: K100 is the IR of the well containing air-saturated water; K0 is the IR of the well filled with oxygen-free water; and IR for each sample is calculated by dividing the fluorescence of the indicator dye (544/645 nm) by the fluorescence of the reference dye (544/590 nm). All samples were run in triplicate.
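The calibration equation itself appears to have been lost in conversion; the sketch below therefore assumes the vendor's standard two-point calibration between the oxygen-free (K0) and air-saturated (K100) wells, and the ~200 µM figure used to convert % air saturation to a concentration is likewise an assumption for illustration only.

```python
def i_r(indicator_fluorescence, reference_fluorescence):
    """IR: indicator-dye fluorescence (544/645 nm) divided by the
    reference-dye fluorescence (544/590 nm) of the same well."""
    return indicator_fluorescence / reference_fluorescence

def percent_air_saturation(ir_sample, k0, k100):
    """Assumed two-point calibration against the oxygen-free (K0) and
    air-saturated (K100) standard wells; returns % air saturation."""
    return 100.0 * (k0 / ir_sample - 1.0) / (k0 / k100 - 1.0)

def oxygen_micromolar(percent_air_sat, o2_air_saturated_uM=200.0):
    """Convert % air saturation to an approximate concentration; ~200 uM for
    air-saturated aqueous medium at 37 C is an assumed round figure."""
    return percent_air_sat / 100.0 * o2_air_saturated_uM

# Hypothetical (indicator, reference) fluorescence readings for the standards and one sample
k0     = i_r(9000, 3000)   # oxygen-free well
k100   = i_r(3000, 3000)   # air-saturated well
sample = i_r(4200, 3000)   # well containing cells
print(oxygen_micromolar(percent_air_saturation(sample, k0, k100)))
```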
Determination of Oxygen Consumption Rate using Oxytherm
Oxygen consumption rate in a closed environment was measured using the Oxytherm system (Hansatech). Samples were run at a cell concentration of 1 × 10⁶ cells/ml for 10 minutes at 37 °C in the appropriate culture media for each cell line. All samples were run in triplicate. Further detailed methods were described in Cook et al. [4].
Laser Confocal Microscopy
Unless otherwise noted, cells were maintained in a stage-mounted atmospheric box (Pathology Devices) at 37 °C, 5% CO2, and 75% humidity during the course of the experiments. All samples were analyzed on an Olympus Fluoview FV1000 laser confocal microscope. Acetylacetonatobis [2-(2′-benzothienyl) pyridinato-kN, kC3'] iridium (III) (BTP) (515 nm excitation/620 nm emission) was utilized in all confocal experiments at a concentration of 5 µM. BTP has been described in detail by Zhang et al. [14]. BTP is a compound that is phosphorescent in low-oxygen conditions and is quenched in the presence of oxygen; the extent of quenching is dependent upon intracellular oxygen concentration. Samples were incubated in the presence of BTP for 1 hour before imaging (under normoxic (20% O2), hypoxic (0.2% O2), or hyperoxic (40% O2) conditions as detailed for specific experiments). In all experiments, cells were plated at a density of 1 × 10⁵ cells in 3 cm dishes with inset cover slips (Mattek). LNCaP and MCF-7 cells were incubated for at least 48 hours, depending on cell adherence and morphology, before the beginning of experiments. All other cell lines were plated 24 hours before the beginning of experiments. At the start of each experiment samples were incubated with BTP plus any additional experimental conditions detailed below for one hour before viewing. Experiments that were carried out in the absence of serum or in the presence of R1881 or flutamide were incubated for twenty-four hours before the addition of BTP. Images from all microscopy experiments were processed using the FV10-ASW 3.1 Viewer (Olympus). Cell phosphorescence for 10 cells per plate was quantified using ImageJ and calculated as corrected total cellular phosphorescence (CTCP) according to the following equation: CTCP = integrated pixel intensity − (area of selected cell × mean pixel intensity of background). This method takes into account the area of the cell when determining its pixel intensity. Images have been false-colored green for easier viewing.
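A minimal sketch of the CTCP formula is given below, assuming the integrated density, cell area, and mean background intensity have already been measured per cell in ImageJ; the numbers are hypothetical.

```python
def ctcp(integrated_intensity, cell_area, mean_background_intensity):
    """Corrected total cellular phosphorescence: integrated pixel intensity
    minus the background signal expected over the cell area."""
    return integrated_intensity - cell_area * mean_background_intensity

# Hypothetical measurements for two cells (arbitrary units / pixels)
cells = [
    {"integrated": 1_250_000, "area": 3_400, "background": 55.0},
    {"integrated":   980_000, "area": 2_900, "background": 52.5},
]
values = [ctcp(c["integrated"], c["area"], c["background"]) for c in cells]
mean_ctcp = sum(values) / len(values)
print(values, mean_ctcp)
```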
Detection of HIF-1α by Western Blotting
For a positive control for the detection of HIF-1α, cells were treated with 200 µM CoCl2 for 6 hours before nuclear extract collection. For samples that were exposed to hypoxia, the cells were incubated at 0.2% oxygen (5% CO2, 94.8% N2) for 6 hours before nuclear extract collection. Nuclear extracts were collected by first adding 10 mM HEPES buffer plus 1 mM DTT and protease and phosphatase inhibitors (100X Halt protease and phosphatase inhibitor cocktail, #1861281 Thermo Scientific) at a ratio of 400 µl per 2 × 10⁶ cells. After the cells were allowed to swell for 15 minutes, 10% NP-40 was added at a ratio of 25 µl per 2 × 10⁶ cells. After centrifugation the cytosolic fraction was removed and the pellet was again resuspended in 10 mM HEPES and centrifuged again to ensure better removal of cytosolic components. The nuclear pellet was then resuspended in 20 mM HEPES buffer plus DTT and protease and phosphatase inhibitors at a ratio of 25 µl per 2 × 10⁶ cells for 15 minutes. Following centrifugation the nuclear fraction was collected. 20 µg of protein from each sample was then separated on an 8% SDS-PAGE gel. Samples were then electro-blotted to a PVDF membrane. The membrane was blocked with 5% milk in TBST. The blot was probed with anti-HIF-1α antibody (Santa Cruz Biotechnology) (1:250 dilution in blocking buffer). The secondary antibody was HRP-linked anti-mouse (2:5,000). PCNA (Santa Cruz Biotechnology) served as a loading control (1:200 dilution). Bands were then visualized using ECL Prime (GE Healthcare) and the ImageQuant LAS 4000 phosphoimager (GE Healthcare). Densitometric analysis was carried out using ImageJ. Band intensities were normalized to the corresponding PCNA loading control intensities.
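As a small illustration of the final normalization step, the sketch below divides each HIF-1α band intensity by the PCNA intensity from the same lane and optionally expresses the result relative to a reference lane; the intensity values are hypothetical.

```python
# Hypothetical densitometry values (arbitrary units) for three lanes
hif1a = {"LNCaP": 18500, "LNr0-8": 2100, "LNCyb": 6400}
pcna  = {"LNCaP": 9500,  "LNr0-8": 9800, "LNCyb": 9100}

# Normalize each HIF-1a band to its own PCNA loading control...
normalized = {lane: hif1a[lane] / pcna[lane] for lane in hif1a}

# ...and optionally express everything relative to one reference lane
reference = "LNr0-8"
relative = {lane: normalized[lane] / normalized[reference] for lane in normalized}
print(normalized, relative)
```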
BTP Incorporation
LNCaP and LNr0-8 cells were seeded on a 96-well plate at a density of 1 × 10⁶ cells in 200 µl. The cells were then allowed to attach overnight, and the next day the media was removed and the cells were covered with PBS containing 100 nM BTP. After a one-hour incubation the supernatant was removed and transferred to a new 96-well plate, and the absorbance was measured at 483 nm. Incorporation of BTP into the cells was determined as the difference in the OD of the PBS that had been added to the cells versus a cell-free control.
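A rough sketch of that incorporation calculation is shown below; the OD readings are hypothetical, and expressing the difference as a percentage of the cell-free control mirrors the ~24-25% uptake values reported in the Results.

```python
def percent_btp_incorporated(od_cell_free, od_supernatant_after_cells):
    """BTP taken up by the cells, as the drop in supernatant absorbance (483 nm)
    relative to the cell-free control, expressed as a percentage."""
    return 100.0 * (od_cell_free - od_supernatant_after_cells) / od_cell_free

# Hypothetical OD483 readings
print(percent_btp_incorporated(od_cell_free=0.520, od_supernatant_after_cells=0.396))
# -> ~23.8% of the added BTP retained by the cells
```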
Statistical Analysis
Statistical significance was calculated using the two-tailed Student's t-test. P-values below 0.05 were considered to be statistically significant.
Results and Discussion
Association of Oxygen Consumption Rate and the Induction of Hypoxia Inside of and Surrounding the Cells without Exogenous Hypoxia
Table 1 shows all cell lines used in this study in conjunction with the characteristics and origins of the cell lines based on a search of the literature. Our previous report suggests that mtDNA content is associated with oxygen consumption rate [4]. We hypothesized that consumption of a large amount of oxygen in cells with high mtDNA content induces hypoxia. To investigate this hypothesis, we measured oxygen consumption rate using Oxytherm (closed system) and oxygen concentration surrounding the cells in a 96-well plate using Oxoplate in an open environment. In the Oxoplate system oxygen is able to diffuse into the media, as shown by the diffusion of oxygen into wells containing water with 2.5 mg/ml of sodium sulfite (Figure S1). Figure 1 shows the correlation between oxygen concentration surrounding the cells and oxygen consumption rate. We found that the prostate cancer cell line LNCaP [19], with high mtDNA content (in comparison with normal prostate tissue) [4,20], showed strong hypoxia surrounding the cells (although the surface of the culture medium is normoxic) and a high rate of oxygen consumption (Fig. 1). The prostate cancer cell line PC-3 [21], with low mtDNA content [4], had a limited ability to consume oxygen and induce hypoxia in the media surrounding the cells as compared with LNCaP (Fig. 1). C4-2 cells (a prostate cancer cell line) [22], with a moderate level of mtDNA content (as compared to LNCaP (high) and LNr0-8 (low)) [4], showed a moderate extracellular hypoxia-inducing ability with a moderate rate of oxygen consumption as compared with LNCaP (Fig. 1). The breast cancer cell line MCF-7 [23] had a strong extracellular hypoxia-inducing ability, although not as strong as LNCaP (Fig. 1). The estrogen receptor-negative breast cancer cell line MDAMB231 [24,25] had a limited ability to induce hypoxia surrounding cells and to consume oxygen as compared with MCF-7 (Fig. 1). Oxygen concentration surrounding the cells in the Oxoplate was well correlated with oxygen consumption rate (Figure 1, R² = 0.82). Induction of hypoxia surrounding cells is likely to be caused by the high oxygen consumption of cells combined with low oxygen diffusion [26].
We observed that the ability of LNCaP to induce hypoxia surrounding cells is cell number dependent. At low cell numbers (1 × 10⁵ and 1 × 10⁴ cells/ml) LNCaP was unable to induce hypoxia surrounding cells in the Oxoplate system (Fig. 2A-B). We then investigated whether cells with high oxygen consumption could be hypoxic intracellularly even though the extracellular oxygen concentration surrounding the cells is high at low cell density. To examine intracellular hypoxic status, we employed the phosphorescent, hypoxia-sensing dye BTP (described in detail in the methods). This dye is phosphorescent in the absence of oxygen and is quenched by oxygen; thus, more intense phosphorescence indicates a lower intracellular oxygen concentration [14]. Fig. 3A shows micrographs of cells stained with BTP (upper panels) plus DIC images (lower panels) showing the position of the cells. Fig. 3B is a quantification of BTP phosphorescence from panel A (described in detail in the methods), reported as average corrected total cellular phosphorescence (CTCP) ± the standard error. Quantification of BTP phosphorescence was carried out to allow for easier interpretation of the data. LNCaP exhibited higher BTP intensity, indicating stronger hypoxia under exogenously normoxic conditions, than C4-2 and PC-3 (Fig. 3A-B, P < 0.001, n = 10). MCF-7 was significantly more hypoxic than MDAMB231 (Fig. 3C-D, P < 0.01, n = 10). As demonstrated above in Fig. 3A-B, LNCaP was able to induce intracellular hypoxia despite the normoxic condition surrounding the cells (Fig. 2A-B, Fig. 3A-B). This intracellular hypoxia in LNCaP and MCF-7 at low cell density is likely to be caused by the high oxygen consumption of mitochondria as compared with oxygen incorporation through the membrane. To investigate whether BTP is incorporated in similar amounts in different cells, we used LNCaP, which shows the highest BTP staining, and LNr0-8, which shows the lowest. To 1 × 10⁶ cells in 0.2 ml, we added 100 nM BTP. Similar amounts of BTP were incorporated into LNCaP and LNr0-8 (23.6 ± 0.6% and 24.9 ± 2.2% (mean ± SE), respectively). In light of these results, we decided to delve further into the mechanisms by which intracellular hypoxia is regulated.
Mitochondrial Respiratory Function Induces Intracellular Hypoxia
To demonstrate that mitochondrial respiratory function is the key mediator in the regulation of intracellular hypoxia, we used rotenone (a specific inhibitor of mitochondrial respiratory chain complex I) to inhibit respiratory function in LNCaP cells.
Following treatment with rotenone, we observed a marked decrease in BTP phosphorescence relative to controls (Fig. 4A-B, P < 0.001, n = 10), indicating that rotenone inhibited oxygen consumption and decreased the level of intracellular hypoxia in a dose-dependent manner. Additionally, we investigated how changes in the mtDNA content of the cells can influence intracellular hypoxia. Mitochondrial respiratory function is eliminated in the mtDNA-deficient LNr0-8 (derived from LNCaP) cell line because of the loss of the 13 mitochondrial respiratory proteins encoded in mtDNA. While lacking mtDNA, these cells still possess mitochondrial structures [4]. Respiratory activity is recovered, at least partially, in the mtDNA-reconstituted LNCyb cell line [4,27]. LNCaP exhibited strong intracellular hypoxia under exogenous normoxic conditions (Fig. 5A-B). LNr0-8 cells exhibited minimal intracellular hypoxia above background levels, and induction of hypoxia was greatly reduced as compared with LNCaP (Fig. 5A-B, P < 0.001, n = 10). LNCyb exhibited intracellular hypoxia, but this hypoxia was not quite as strong as in LNCaP (Fig. 5A-B). These results indicate that intracellular hypoxia was dependent on mitochondrial respiratory function, which is regulated by mtDNA content. The intermediate level of intracellular hypoxia observed in LNCyb was expected, based on previous work with LNCyb cells in our lab; these cells possess a phenotype that is between LNCaP and LNr0-8 in terms of oxygen consumption [4]. Taken together, these data indicate that mitochondrial respiratory function is required to induce intracellular hypoxia.
Figure 5 legend (excerpt): Nuclear extracts from all three cell lines following treatment with CoCl2 served as positive controls for the detection of HIF-1a (center). All three cell lines were also exposed to hypoxia (0.2% O2) for 6 hours (right). PCNA served as a loading control. (D) Densitometric analysis of western blotting results in C. Band intensities were normalized to the corresponding PCNA loading control band. doi:10.1371/journal.pone.0088911.g005
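For the densitometric analysis mentioned in the Figure 5 legend, normalisation to the PCNA loading control can be expressed in a few lines of Python; the band intensities below are invented placeholders rather than the published values, and the normalisation-to-LNCaP step is an illustrative choice.

# Hypothetical densitometry readings (arbitrary units) for nuclear HIF-1a and PCNA.
hif1a = {"LNCaP": 1850.0, "LNCyb": 620.0, "LNr0-8": 140.0}
pcna = {"LNCaP": 1000.0, "LNCyb": 980.0, "LNr0-8": 1020.0}

# Normalise each HIF-1a band to its PCNA band, then express relative to LNCaP.
normalized = {line: hif1a[line] / pcna[line] for line in hif1a}
reference = normalized["LNCaP"]
for line, value in normalized.items():
    print(f"{line}: HIF-1a/PCNA = {value:.2f} ({value / reference:.0%} of LNCaP)")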
HIF-1a Activation in LNCaP Cells is Mitochondrial Respiration Dependent
HIF-1a is considered to be one of the primary regulators of the cell's response to low oxygen tension [28]. HIF-1a stability is tightly linked to cellular oxygen availability via its regulation by oxygen-dependent enzymes, and it is stabilized under exogenous hypoxic conditions [29]. Stable HIF-1a is able to translocate into the nucleus, where it serves as a transcription factor for many hypoxia-regulated genes. Mechanisms of non-hypoxic regulation have also been documented [28]. Regulation of HIF-1a by mitochondrial function has also been implicated by others [29]. Therefore, we investigated whether the intracellular hypoxia that we observed in LNCaP cells induces HIF-1a stabilization under normal culture conditions. To evaluate HIF-1a activation, we examined the expression of HIF-1a in the nuclei of LNCaP, LNr0-8, and LNCyb by western blotting. All three cell lines treated for 6 hours with CoCl2 served as positive controls for the detection of HIF-1a (Fig. 5C-D), as this compound inhibits the degradation of endogenously produced HIF-1a. As expected, intracellularly hypoxic LNCaP showed HIF-1a expression in the nucleus under normal culture conditions (Fig. 5C-D). LNr0-8, which is normoxic intracellularly, showed only a slight amount of detectable HIF-1a in comparison to LNCaP (Fig. 5C-D). LNCyb, with an intermediate level of intracellular hypoxia, showed a slight increase in HIF-1a expression relative to LNr0-8. The low, but still detectable, levels of HIF-1a found in LNr0-8 may be the result of remaining oxygen consumption without mtDNA-encoded mitochondrial respiratory chain proteins, or of an underlying non-hypoxic regulation [29]. However, the majority of the HIF-1a level in the nucleus appears to be regulated by mitochondrial function (and therefore intracellular hypoxia), as evidenced by the partial restoration of HIF-1a in LNCyb. In the CoCl2-treated controls, LNCaP showed only a slight increase in HIF-1a expression (Fig. 5C-D). This suggests that HIF-1a levels may already be near the maximum in LNCaP cells under normal incubation conditions. As expected, CoCl2 stabilized HIF-1a in LNr0-8, showing that these cells endogenously produce HIF-1a but that it is degraded in the absence of CoCl2 (Fig. 5C-D). HIF-1a expression was also increased in LNCyb cells, as expected, upon treatment with CoCl2 (Fig. 5C-D). Additionally, HIF-1a expression was enhanced in LNCaP and slightly enhanced in LNCyb upon exposure to 0.2% oxygen for six hours (Fig. 5C-D). LNr0-8 cells showed only a very slight increase in HIF-1a following exposure to hypoxia (Fig. 5C-D). This is in agreement with previous findings by others showing that HIF-1a cannot be strongly induced under hypoxia in cells lacking mitochondrial function [30]. These results suggest that intracellular hypoxia determined by mitochondrial respiratory function regulates HIF-1a activation, leading to hypoxia-related gene expression.
Mitochondrial Function is Still Required to Induce Strong Intracellular Hypoxia, even under Exogenous Hypoxic Condition
We investigated how a change in the exogenous oxygen concentration affects intracellular hypoxic status. First, we exposed LNCaP cells to exogenous hypoxia (0.2% O2) or hyperoxia (40% O2) for 1 hour. Incubation under normal oxygen conditions (20% O2) served as a control. Exposure of LNCaP cells to hypoxia significantly increased BTP phosphorescence, indicating an increase in intracellular hypoxia (Fig. 6A-B, P < 0.001, n = 10). Conversely, BTP phosphorescence was greatly reduced when LNCaP was exposed to exogenous hyperoxia (Fig. 6A-B, P < 0.001, n = 10), indicating a shift from intracellular hypoxia to normoxia. We next examined the effects of exogenous hypoxia (0.2% O2) on LNCaP, PC-3, and LNr0-8. LNCaP, PC-3, and LNr0-8 incubated for 1 hour under normal culture conditions (20% O2) served as controls. LNr0-8 and PC-3 showed limited BTP phosphorescence (Fig. 7A-B, P < 0.001, n = 10), indicating intracellular normoxia under 20% O2 relative to LNCaP. We exposed LNCaP, PC-3, and LNr0-8 to exogenous hypoxia (0.2% O2) for 1 hour. Exogenous hypoxia slightly increased BTP phosphorescence in LNr0-8, but to a far lesser extent than that seen in LNCaP under the normoxic condition, suggesting that exogenous hypoxia in LNr0-8 was not sufficient to induce the strong intracellular hypoxia observed in LNCaP even under normoxic conditions (Fig. 7A-B, P < 0.001, n = 10). PC-3 showed a great increase in BTP phosphorescence with exogenous hypoxia (Fig. 7A-B, P < 0.001 when compared to normoxic PC-3, n = 10). BTP phosphorescence in PC-3 under the exogenous hypoxic condition was higher than that in LNCaP under the normoxic condition (Fig. 7A-B). These results demonstrate the induction of strong intracellular hypoxia by exogenous hypoxia in PC-3 but not in LNr0-8 (Fig. 7A-B). We believe that the observed differences can be attributed to mitochondrial respiratory function, as PC-3 still possesses a small amount of mitochondrial respiratory function (see above) but LNr0-8 does not, due to a complete absence of mtDNA [4]. We expect that the intracellular oxygen concentration in LNr0-8 under the exogenous 0.2% oxygen concentration is the same as or less than 0.2%. Since BTP phosphorescence in LNCaP under the normoxic condition is higher than that in LNr0-8 under the exogenous hypoxic condition (0.2% oxygen), the intracellular oxygen concentration in LNCaP under the exogenous normoxic condition should be less than 0.2% oxygen. These results demonstrate that mitochondrial respiratory function is a key regulator of the induction of intracellular hypoxia. To further explore the mechanism, we examined various nutrients in the culture media, such as glucose, and cellular growth regulators, such as androgen, for their effects on intracellular oxygen concentration.
Glucose can Regulate Hypoxia in a Parabolic Dose-dependent Fashion
Using the Oxoplate we found that 4.5 mg/ml of glucose (the normal glucose concentration contained in DMEM) could induce hypoxia surrounding LNCaP cells, where other potential sources for oxidative phosphorylation (pyruvate and hydroxyurea) could not (Fig. 8). Experiments were carried out in glucose- and pyruvate-free DMEM medium plus dialyzed FCS. In Fig. 8, samples in the presence of hydroxyurea or pyruvate showed a weak decrease in the oxygen concentration surrounding the cells as compared with the no-glucose control and with the normal glucose concentration (4.5 mg/ml). The glucose-free control showed a strong decrease in extracellular oxygen concentration as compared with samples with 4.5 mg/ml of glucose, followed by a plateau at the 40 minute time point (Fig. 8). We then investigated the effect of glucose concentration on the oxygen concentration surrounding cells and on oxygen consumption. The ability of LNCaP to induce hypoxia surrounding the cells is greatly reduced by the complete depletion of glucose when compared to the 4.5 mg/ml control, with a steep initial decrease in the oxygen concentration surrounding the cells followed by a plateau at the 40 minute time point (Fig. 9). 0.45 mg/ml (10% of the DMEM glucose concentration) and 0.0045 mg/ml (0.1% of the DMEM glucose concentration) induced a moderately increased and decreased ability of the cells to induce hypoxia surrounding the cells, respectively, relative to control (4.5 mg/ml) (Fig. 9). Additionally, 0.0045 mg/ml of glucose showed a similar pattern to that of the zero-glucose sample, with a strong initial decrease in extracellular oxygen followed by a plateau at the 60 minute time point (Fig. 9). The sharp decrease in extracellular oxygen in the low- or no-glucose conditions may be caused by utilization of available glucose for oxidative phosphorylation to generate energy. The plateau observed in the zero and low glucose conditions may be due to the depletion of remaining glucose. At the 0.45 mg/ml concentration of glucose there was a slight increase in hypoxia surrounding the cells relative to the 4.5 mg/ml glucose control (Fig. 9). 0.045 mg/ml glucose was found to be the strongest inducer of hypoxia surrounding the cells (Fig. 9). These data indicate that glucose dosage influences the induction of hypoxia surrounding the cells by LNCaP in a parabolic fashion (Fig. 9). Additionally, the oxygen consumption rate was maximal for 0.045 mg/ml of glucose when compared with 4.5 and 0 mg/ml of glucose, in agreement with the Oxoplate results (Figure S2). We then investigated the effects of glucose concentration on intracellular hypoxia as determined by BTP. 0.045 mg/ml glucose induced the strongest BTP phosphorescence, indicating the induction of the strongest intracellular hypoxia of the concentrations tested relative to the 4.5 mg/ml glucose control (Fig. 10A-B, P < 0.001 when compared to control, n = 10). 0.45 and 0.0045 mg/ml of glucose both produced BTP phosphorescence that was lower than 0.045 mg/ml of glucose, indicating that the relationship between glucose availability and intracellular oxygen concentration is parabolic (a bell-shaped curve in terms of glucose concentration versus level of BTP phosphorescence). This parabolic relationship between intracellular oxygen concentration and glucose availability may be due to the Crabtree effect (the inhibition of oxidative phosphorylation at high glucose concentrations [31,32]).
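One way to make the bell-shaped (parabolic on a log axis) dose dependence quantitative is to fit a Gaussian in log10(glucose) to the BTP response; the Python sketch below does this with scipy, using invented response values at the four glucose concentrations quoted above, so the fitted optimum is illustrative rather than a result of this study.

import numpy as np
from scipy.optimize import curve_fit

# Glucose concentrations from the text (mg/ml) and hypothetical BTP responses
# (relative CTCP); the response values are placeholders.
glucose = np.array([4.5, 0.45, 0.045, 0.0045])
btp = np.array([1.0, 1.6, 2.8, 1.4])

def bell(log_c, peak, log_c0, width):
    # Gaussian in log-concentration: a simple model of a bell-shaped dose response.
    return peak * np.exp(-((log_c - log_c0) ** 2) / (2.0 * width ** 2))

params, _ = curve_fit(bell, np.log10(glucose), btp, p0=[3.0, np.log10(0.045), 1.0])
peak, log_c0, width = params
print(f"fitted optimum ~ {10 ** log_c0:.3f} mg/ml, peak response {peak:.2f}")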
The cell depends largely on glycolysis when glucose is plentiful but resorts to oxidative phosphorylation when glucose starts to become limiting. This switch to oxidative phosphorylation under low glucose conditions is advantageous to the cell, since the amount of ATP produced by oxidative phosphorylation is extremely high in comparison to the ATP yield from glycolysis. At high glucose concentrations it is more advantageous to produce energy by glycolysis rather than by oxidative phosphorylation, so that the cell has adequate energy production without the production of harmful reactive oxygen species in the mitochondrial respiratory chain; this behaviour is commonly discussed in terms of the Warburg effect [33].
Androgen is One of the Regulators of Intracellular Oxygen Concentration
Estrogens have been shown to be modulators of mitochondrial respiratory activity, and the androgen receptor has been shown to influence metabolism in prostate cancer [34,35]. However, the effects of androgen on the mitochondria remain largely undocumented [25]. Both the estrogen receptor and the androgen receptor have been shown to localize to the mitochondria [36][37][38]. Therefore, we hypothesized that androgen may be a modulator of mitochondrial function in LNCaP cells. As shown in Fig. 11A-B, LNCaP in the presence of FCS showed a stronger ability to induce hypoxia surrounding the cells than the FCS-free control (P < 0.01, n = 3). In agreement with this finding, LNCaP in the presence of FCS showed stronger BTP phosphorescence than in the absence of FCS, indicating stronger intracellular hypoxia in the presence of FCS (Fig. 11C-D, P < 0.001, n = 10, when compared to the FCS-free control). An association of estrogen, androgen, and the estrogen receptor with mitochondrial function has been implicated [39], and LNCaP has been demonstrated to be androgen sensitive [19]. However, whether estrogen and androgen can affect intracellular hypoxic status had yet to be demonstrated. Therefore, we hypothesized that androgen affects intracellular hypoxia in LNCaP cells. In microscopic studies using BTP, R1881, a synthetic androgen, increased phosphorescence to approximately four times the control level, indicating that R1881 induced hypoxia in the absence of FCS (Fig. 12A-B, P < 0.01, n = 10). Addition of flutamide, an inhibitor of androgen-androgen receptor binding, in the presence of FCS reduced BTP phosphorescence, indicating that flutamide reduced intracellular hypoxia to a level similar to the FCS-free control, possibly by blocking small amounts of androgen present in the serum or synthesized by LNCaP (Fig. 12A-B, P < 0.001, n = 10) [40,41]. The normal concentration of total testosterone (androgen) in RPMI supplemented with 10% normal FCS has been reported to range from 55.1 to 97.5 pM; normal human males have a total testosterone level of 10-35 nM [42]. Flutamide in the absence of serum showed a slight increase in BTP phosphorescence relative to the serum-free control (CTCP/cell: 190200 ± 13038 (SE)% versus 144549 ± 16316 (SE)%, respectively) (Fig. 12A-B, P < 0.05 when compared to the serum-free control, n = 10), implicating androgen synthesis by LNCaP. These data are the first to indicate that androgen is one of the modulators of cellular respiration and intracellular oxygen level.
Here it should be pointed out that in vivo oxygen concentration within tissues ranges from 0% to 14%, depending on the tissue, indicating the potential role of exogenous hypoxia in inducing intracellular hypoxia [43]. We showed that LNCaP induces strong endogenous intracellular hypoxia even under normal culture conditions (20% O2, 5% CO2). PC-3 induces strong hypoxia only under exogenous hypoxia. However, cells with no mtDNA failed to induce strong intracellular hypoxia even under exogenous hypoxia. These findings indicate that oxygen consumption via mitochondrial respiration is required to induce strong intracellular hypoxia under exogenous hypoxic conditions. Our results also suggest that endogenous intracellular hypoxia-induced HIF-1a activation (Fig. 5C-D) is dependent on mitochondrial respiratory function and may regulate many hypoxia-related cell processes. Since the Km of cytochrome a+a3, the center for consuming oxygen in mitochondrial respiratory chain complex IV, is very low and the reaction is very rapid [44], reduction of the intracellular oxygen concentration by the consumption of oxygen by cytochrome a+a3 can likely inhibit the enzymatic activity of other oxygen-requiring enzymes such as P450 and Cyp51 [5,44,45] by reducing oxygen availability. Additionally, recent findings show that the protein synthesis machinery is also regulated by intracellular oxygen concentration [46]. The fact that androgen can regulate intracellular oxygen concentration indicates that androgen can regulate oxygen-requiring enzymes. Glucose and androgen may be working synergistically to increase oxygen consumption.
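To illustrate why a very low Km for cytochrome a+a3 lets mitochondria keep consuming oxygen (and thereby starve other oxygen-requiring enzymes) even at low intracellular O2, the toy Michaelis-Menten sketch below uses a placeholder Km value chosen only for illustration, not a measured constant.

# Michaelis-Menten picture of mitochondrial O2 consumption: with a very low Km,
# the rate stays close to Vmax even when intracellular O2 is far below air saturation.
VMAX = 1.0   # relative maximal consumption rate
KM_O2 = 0.1  # percent O2; illustrative placeholder for a "very low" Km

def consumption_rate(o2_percent):
    return VMAX * o2_percent / (KM_O2 + o2_percent)

for o2 in (20.0, 5.0, 1.0, 0.2):
    print(f"{o2:5.1f}% O2 -> rate = {consumption_rate(o2):.2f} x Vmax")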
An interesting trend was noticed in our data. Well-differentiated cell lines such as LNCaP and MCF-7 [19,23] showed a strong hypoxia-inducing ability. The moderately differentiated cell line C4-2 [22] showed a moderate ability to induce extracellular and intracellular hypoxia. The poorly differentiated cell lines PC-3 [21] and MDAMB231 [24] had only a limited ability to induce extracellular and intracellular hypoxia. This trend in in vitro cell lines suggests that, at least in prostate cancer, and possibly in breast cancer, the degree of cancer progression can be related to cellular oxygen status. In combination with our previous findings linking decreased oxygen consumption in prostate cancer to the activation of Ras [4], our findings open up new avenues for investigation of the pathophysiology and the progression of prostate cancer.
Figure S1. Diffusion of oxygen into water containing 2.5 mg/ml of sodium sulfite with or without layered mineral oil.
File S1. Materials and methods for Figures S1 and S2 are detailed in File S1. | 7,126 | 2014-02-21T00:00:00.000 | [
"Biology",
"Medicine"
] |
Specifying and implementing web content management software
Many organizations are now considering using a web content management software (CMS) product to facilitate the effective management of web sites and intranets. With many hundreds of open source and commercial CMS products to choose from, the process of establishing requirements and selecting the best product for the long term can be both complex and risky. The implementation itself also needs to be managed with care, especially where there is a considerable volume of legacy content to be migrated to the new system. This paper provides an introduction to the benefits and challenges of content management software. It emphasizes that it may take a year from the time that the decision to assess the requirements for a CMS is taken to a full and effective deployment. The costs of implementation are also outlined, as these are invariably significantly greater than the CMS licence cost.
The way in which an organization presents its products and services is now very largely influenced by the way in which these are presented through its web site. This is the case whether it is a publisher setting out a range of journals, or a library providing services to customers. In many cases the library site is also an element of an intranet, and so may have to present a different face to external and internal audiences. The majority of web sites are still built using page authoring software such as Microsoft Front Page or Macromedia Dreamweaver and in many cases these products can provide very user-friendly web experiences.
However, the publishing and business environments are now subject to rapid change. Publishers continue to acquire new titles and other publishers. In the business environment, new departments and sources of information result in the need to revise the information architecture of the web site or intranet on a regular basis. This is where the apparently low-cost investment of using page authoring software to compile static web sites inevitably causes problems in modifying text, links and graphics across a substantial, and often unknown, number of pages.
In the case of intranets there is an additional problem as organizations realize that using the webmaster approach of a small team of specialists adding content to the site just does not scale.
Content is not added quickly enough, and the site soon loses the 100% trust of employees, which it is essential for an intranet to maintain. Of course, in theory every appropriate member of staff could be given access to, and training on, Front Page (as an example), but often staff only publish on an occasional basis, and so have to go through a relearning cycle just to put up the quarterly sales report. This is neither effective nor efficient.
Over the last couple of years there has been increasing interest in using content management software to provide a more flexible site authoring environment, and in this article the basic features of content management are set out.
CMS and CMS
An initial problem is that the expression 'CMS' is used in two different ways. CMS can stand for both 'content management software' and 'content management system'. A content management system enables content to be managed on a lifecycle basis, supporting its creation, re-use and presentation. Content management software is, in effect, specialized database software that supports a content management system. It is possible to have a content management system without content management software, and just having installed content management software does not mean that the organization has a content management system. Confused? You are not alone!
MARTIN WHITE, Managing Director, Intranet Focus Ltd
The key features of content management software are:
■ content creation through templates, which requires no technical expertise
■ content review supported by work-flow
■ content versioning closely managed
■ content tagged and held in a repository
■ content repurposed for delivery to specific audiences
■ site design framework independent of content structure
■ comprehensive administration functions.
It is worth looking at each of these in sequence.
The objective of using templates is to enable employees to contribute content without any need to be familiar with page authoring software. It also enables the site manager to ensure that standards for page presentation are maintained, and that appropriate metadata is added in a consistent manner. The concept of work flow is very familiar to publishers and to libraries. The use of work flow in a CMS is to enable content to be reviewed prior to publication, but all too often this can be taken to extremes. Just because work flow is provided does not mean to say that every content process has to be handled in this way. Indeed the default should be that there is no work-flow process used unless there is a clear value added to the user of the site.
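As a toy illustration of template-driven contribution (not taken from any particular CMS product), the Python sketch below shows how a contributor can supply only content fields while the system applies a fixed page layout; the field names and markup are invented.

from string import Template
from datetime import date

# Invented page template: the contributor fills in fields, the CMS supplies layout.
PAGE = Template(
    "<html><head><title>$title</title></head>\n"
    "<body>\n"
    "<h1>$title</h1>\n"
    "<p class='byline'>$author, $published</p>\n"
    "<p>$body</p>\n"
    "</body></html>\n"
)

page = PAGE.substitute(
    title="Quarterly sales report",
    author="A. Contributor",
    published=date.today().isoformat(),
    body="Sales rose 4% quarter on quarter.",
)
print(page)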
In a CMS, every time that a piece of content is checked out of the system and even the smallest change is made (perhaps to Anglicize a word) then a new version will be created. This can be very valuable in ensuring that content changes are tracked. However, a long document that is being developed by a number of authors can quickly build up a very long list of versions, and identifying intermediate versions can be difficult.
Behind every CMS is a database, though most CMS vendors refer to it as a repository. Each piece of content is individually tagged and databased for re-use. This is an appropriate place to note just one of the differences between a document management system and a content management system. In a document management system the emphasis is on maintaining the integrity of the entire document. A content management system will break the document up into individual sections, images, tables, logos, etc. and in effect hold a virtual version of the document which can either be rebuilt on demand, or individual elements re-used in other documents.
Most CMS products require the organization to have a suitable database environment, such as SQLServer, Oracle, mySQL or similar. The problem often arises that although the organization does have SQLServer licences, there are not enough such licences to support a CMS. This is just one of the many hidden costs of implementing a CMS.
It is at this stage that metadata management becomes important. There is no point in fragmenting a document into individual content components if there is insufficient information to identify them and re-use them. Without a consistent and thorough metadata scheme the investment in a CMS will be wasted. The proposed implementation of a CMS is usually the first time an organization has had to face the need to develop a metadata scheme. The basic building blocks are those in the Dublin Core Metadata set, but there is also a specialized scheme, PRISM, for the publishing industry.
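As a rough sketch of what a per-component metadata record built on the Dublin Core element set might look like (the values and the extra completeness check are invented for illustration, not prescribed by Dublin Core or PRISM):

# A minimal metadata record using a subset of the Dublin Core element set.
component = {
    "dc:title": "Interlibrary loan policy",
    "dc:creator": "Library Services",
    "dc:subject": "interlibrary loans; document supply",
    "dc:date": "2004-02-12",
    "dc:type": "Text",
    "dc:format": "text/html",
    "dc:identifier": "/policies/ill-policy",
    "dc:language": "en-GB",
}

def is_complete(record, required=("dc:title", "dc:creator", "dc:date", "dc:identifier")):
    # Fields needed before a content component can be re-used and tracked.
    return all(record.get(field) for field in required)

print(is_complete(component))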
CMS software will be able to repurpose content automatically to take into account different styles and formats used in a web environment. In general there is an emphasis in CMS selection on content contribution, and not enough on working through how publishing will be managed.
One important attribute of a CMS is that the content and the design can be divorced from each other. Although much can be accomplished with cascading style sheets (CSS) in terms of creating new designs and layouts for a web site, the options within a CMS are usually more powerful. However, attention needs to be paid right at the outset to information architecture issues. This is because each CMS will need to be customized during the implementation, and although changes are possible later it is rarely as easy as the CMS vendor will probably suggest.
Most CMS products provide an almost endless array of administration capabilities. These are especially valuable in a decentralized content contribution situation so that reports can be produced about content that has not been updated, or to highlight content from a specific contributor, who has left the organization for example, enabling it to be allocated to another employee. Other important aspects of the administration capabilities are the management of document security and the ability to design new templates.
Technology options for a CMS
There are four CMS options. The first is for the organization to build its own CMS or to ask their web agency to do so. In theory this often looks like the low-cost option, but in practice it is easy to end up with a solution that meets current requirements but cannot be developed in the future without significant additional development costs. When a commercial CMS vendor funds product enhancements the cost can be spread across much of the current client base as well as new customers.
There has been a lot of recent interest in using open source software to build web sites and intranets. The software is either free or can be downloaded for a nominal fee. In many cases this is a good option, but only where the organization has the relevant development skills as in general this software is rather poorly documented and certainly not supported other than through informal discussion threads.
The number of commercial CMS vendors continues to grow each month, and there are probably around 300 or more at present, though a small number (perhaps ten) account for perhaps 50% of all the current installed base. CMS products are complex application suites, and are really a set of tools that need extensive customization. The higher the price the higher the degree of functionality (usually) and the more the product needs to be customized to meet the needs of an organization.
Another option is to purchase a portal application. However, a portal is really only a desktop presentation of content from a wide range of databases and other sources, and usually has only a very basic level of content contribution functionality.
The final point to consider is the value of a search engine. Many web sites have only limited search functionality and even large intranets often fail to implement a search engine of sufficient utility. Many content management solutions come with search software included, but this is often only there to assist a content contributor to locate content for re-use, and not to provide searching across the site. When a search engine module is included it may well be a 'light' version without some of the more sophisticated features.
One of the questions that is yet to be resolved in practice is the extent to which an 'enterprise' solution is either possible or desirable. In other words, can any one CMS solution meet the needs of a graphics-rich web site with e-commerce applications, an intranet and an extranet? At first glance the appeal of reducing development costs and licence fees may seem attractive but in fact the level of customization needed for each application, and the potential risks of the entire project getting behind schedule through unforeseen implementation issues are powerful arguments for not considering this approach at present. The current alternative is to buy 'best-of-breed' solutions for the web site and the intranet, and glue them together with the effective use of XML and an integrated approach to metadata.
Implementing a CMS
The implementation of a CMS is a major task for any organization. Even if only a few content contributors are involved any failure to meet expectation will be visible to every customer and every employee. The majority of IT departments are unfamiliar with this type of software, and so have little experience on which to base decisions.
Without doubt it is essential that a very clear requirements document is prepared, which sets out the business requirements and is not just a list of functions derived from a book or web site. This document should also form the basis for the selection of the vendor, even if just open-source software is being considered.
A typical time-scale for CMS implementation is along the following lines:
Month 0 - Start of the selection process
Month 2 - Completion of invitation to tender
Month 3 - Responses received and an initial selection carried out
Month 4 - Detailed discussions with the shortlisted vendors
Month 5 - Decision made on a preferred vendor
It may be possible to reduce the time a little, but only by a month or so - which means that if you are now reading this paper and you decide that there are merits in implementing a CMS then you will not have the CMS in full operation until 2005! The schedule becomes a lot longer if the organization needs to go to a full public procurement, for example in the Official Journal of the EU. Add four months.
There are four important implementation issues that need to be considered at the outset. The first of these is that there is a considerable variation in the way that a vendor prices a CMS. It could be per seat, per site, per server, per module, indeed per just about anything. A commercial vendor will also be charging 'standard' licence support fees of 20% of the base licence cost. In addition there will be the cost of the consultancy to carry out the customization of the CMS. These products are specialized, and quite proprietary, so there are very few freelance consultants available for hire. As a very rough rule of thumb the additional costs over and above the basic software licence are likely to be at least as much again, and can easily be three or four times as great. This means that if the organization has allocated £300,000 for the CMS implementation, then it will only be able to afford to buy products with a licence cost of under £100,000.
The second problem is that of migrating legacy content to the new CMS. This is not straightforward, and it may be necessary to 'touch' each page, convert the look and feel and also add appropriate metadata. Assuming that each page of a 10,000-page site or intranet takes just two minutes to move into the new format, this still represents around two person-months of effort. In reality the time taken is much longer, and it is not a process that is at all easy to automate.
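The two-minutes-per-page estimate above can be checked with a few lines of Python; the figure of 150 working hours per person-month is an assumption introduced here for the conversion.

pages = 10_000
minutes_per_page = 2               # the optimistic figure quoted above
hours = pages * minutes_per_page / 60
person_months = hours / 150        # assumed working hours per person-month
print(f"{hours:.0f} hours ~= {person_months:.1f} person-months")
# Roughly 333 hours, i.e. a little over two person-months at this assumed rate.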
CMS implementation is a sufficiently complex project that it cannot be carried out by the current webmaster and team without risking a very serious impact on the quality of the existing site. Additional staff may have to be recruited and trained, and yet may not be needed once the CMS is implemented. It is the cost of additional staff during the implementation phase that adds considerably to the overall cost of the CMS project.
Finally, the implementation of a CMS will result in a change of culture, especially if the CMS is used to decentralize content contribution in an intranet. An issue here is that staff will be asked to contribute to the intranet, but their managers may not be aware of how much time and effort is involved, even with template-driven contribution. Too many intranets are run on a 'hobby' basis, and it is very important that web contribution roles are included in job descriptions and evaluations from the outset.
In conclusion
This paper sets out only some of the basic elements of content management software and systems. Implementing a CMS, either for a web site or an intranet, is a major project with a lot of hidden costs. The benefits can be substantial in being able to enhance communications with customers, prospects and employees, but these benefits only arise when there are very clear objectives, expectations are balanced by resources, and the true scale of the implementation of a CMS is fully understood. | 3,634.4 | 2004-03-15T00:00:00.000 | [
"Computer Science"
] |
DNA for Nano-bio Scale Computation of Chemical Formalisms Using Higher Order Logic (HOL) and Analysis Using an Interdisciplinary Approach
Introduction
The idea that molecular systems can perform computations is not new and was indeed more natural in the pre-transistor age. Most computer scientists know of von Neumann's discussions of self-reproducing automata in the late 1940s, some of which were framed in bio-molecular terms based on bio-inspiration. Here the basic issue was that of bootstrapping: can a machine construct a machine more complex than itself? Important was the idea, appearing less natural in the current age of dichotomy between hardware and software, that the computations of a device can alter the device itself. "This vision is natural at the scale of molecular reactions, although it may appear 'utopic' to those running huge chip production facilities. Alan Turing also looked beyond purely symbolic processing to natural bootstrapping mechanisms in his work on self-structuring in molecular and biological systems. Purely chemical computers have been proposed by Ross and Hjelmfelt extending Turing's approach" [1][2][3][4][5][6][7].
In biology, the idea of molecular information processing took hold starting from the unraveling of the genetic code and translation machinery, and extended to genetic regulation, cellular signaling, protein trafficking, morphogenesis and evolution - all of this independently of developments in the life sciences. For example, because of the fundamental role of bio-information processing in evolution, and the ability to address these issues on laboratory time scales at the molecular level, a number of alternative solutions exist indefinitely 5-7.
Theoretical Background and Motivation
The unique properties of DNA make it a fundamental building block in the fields of supramolecular chemistry, nanotechnology, nano-circuits, molecular switches, molecular devices, and molecular computing. In addition to information processing, DNA acts as a molecular-scale heat engine; DNA also stores energy, which becomes available on hybridization of complementary strands or hydrolysis of its phosphodiester backbone [6][7][8][9][10].
"Bio-molecular computers are molecular-scale, programmable, autonomous computing machines in which the input, output, software, and hardware are made of biological molecules.Bio-molecular computers hold the promise of direct computational analysis of biological information in its native bio-molecular form, eschewing its conversion into an electronic representation to advance the nanoscale fabrication techniques for nanobio devices".
Nucleic acids are molecules of choice for both established and emerging nanoscale technologies. These technologies benefit from large functional densities of 'DNA processing elements' that can be readily manufactured [11][12][13][14][15][16]. To achieve the desired functionality, polynucleotide sequences are currently designed by a process that involves tedious and laborious filtering of potential candidates against a series of requirements and parameters. Here, we present a complete novel methodology for the rapid rational design of large sets of DNA sequences using HOL (Higher Order Logic) for nanoscale or nano-bio scale systems [15][16].
As we know, applied mathematics and computer science could provide the needed abstraction for consolidating the knowledge of bio-molecular systems or bio-inspired systems. Computer and bio-molecular systems both start from a smaller set of elementary components from which, layer by layer, more complex entities are constructed to serve ever more demanding applications based on sophisticated functions. "Nevertheless, the mathematical abstractions, tools and methods used to specify and study computer systems should illuminate our accumulated knowledge about bio-molecular systems. The exceptional ability of DNA to mediate charge transport (CT) is the basis of novel molecular devices and may be exploited by the cell for both redox sensing, signaling or other specified information processing" [8][9][10][11]17,18.
"Interpreting chemical reactions in terms of nanobio scale interaction is yet another challenge.So far CMOS design and analog emulation of Reaction-Diffusion(R-D) systems have demonstrated the feasibility of mapping chemical dynamics onto silicon architectures.Semiconductor devices based on minority carrier transport may succeed in the upcoming designs of nano-scale R-D processors and single-electron R-D circuits" [15][16][17] .
In spite of numerous promising preliminary results obtained in the R-D computing domain, this particular field still remains an imaginary interdisciplinary art rather than a science; most "Reaction-Diffusion" processors are produced on an ad hoc basis without structured top-down approaches, mathematical verification, or rigorous methodology relevant to other domains of advanced computing and computer hardware design (which could be nano-bio wetware for implementation!). It is in this context that we have planned to consider HOL for rigorous analysis. There is a need to develop coherent theoretical foundations for "Reaction-Diffusion" computing in chemical or biochemical media, and to adapt new computational substrates. As Einstein said, "Imagination is more important than domain knowledge" [10][11][12][13][14][15][16]18.
The connection with nanomachines and nanosystems is very clear and will become more pervasive in the near future. In our view, DNA Computation is exciting for the following reasons [11][12][13][14][15][16][17][18]:
• opens the possibility of a simultaneous bootstrapping solution of future computer design, construction and efficient computation.
• provides programmable access to nanosystems and the world of molecular biology, extending the reach of computation.
• admits complex, efficient and universal algorithms running on dynamically constructed dedicated molecular hardware.
• can contribute to our understanding of information flow in evolution and biological construction.
• is opening up new formal models of computation, extending our understanding of the limits of computation.
HOL as a simulation tool to develop biomolecular computing systems
HOL (Higher Order Logic) denotes a family of interactive theorem proving systems sharing similar (higher-order) logics and implementation strategies. Systems in this family follow the LCF approach, as they are implemented as a library in some programming language. This library implements an abstract data type of proven theorems, so that new objects of this type can only be created using the functions in the library which correspond to inference rules in higher-order logic. As long as these functions are correctly implemented, all theorems proven in the system must be valid. In this way, a large system can be built on top of a small trusted kernel (source: Wikipedia and 5, 6).
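The LCF idea described above can be caricatured in a few lines of Python: theorem values can only be produced by the kernel's rule functions, so anything of the Theorem type carries a derivation. This is a teaching sketch, not how Isabelle or any HOL system is actually implemented, and the two rules shown are invented examples.

class Theorem:
    _key = object()  # private token held by the kernel

    def __init__(self, prop, _token=None):
        if _token is not Theorem._key:
            raise ValueError("theorems can only be created by inference rules")
        self.prop = prop  # e.g. ("->", "p", "p") or an atomic proposition

    def __repr__(self):
        return f"|- {self.prop}"

def axiom_identity(a):
    # Rule: |- a -> a
    return Theorem(("->", a, a), _token=Theorem._key)

def modus_ponens(th_imp, th_ant):
    # Rule: from |- a -> b and |- a, derive |- b
    op, a, b = th_imp.prop
    if op != "->" or a != th_ant.prop:
        raise ValueError("modus ponens does not apply")
    return Theorem(b, _token=Theorem._key)

print(axiom_identity("p"))  # |- ('->', 'p', 'p')
try:
    Theorem("q")            # bypassing the rules is rejected by the kernel
except ValueError as err:
    print("rejected:", err)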
"Isabelle is a generic proof assistant.It allows mathematical formulas to be expressed in a formal language and provides tools for proving those formulas in a logical calculus.The main application is the formalization of mathematical proofs and in particular formal verification, which includes proving the correctness of computer hardware or software and proving properties of computer languages and protocols" 5,6 .Author: Nirmal, LapTec, UNESP, Sorocaba, SP, Brazil.
Sources
DNA is considered as an abstraction, defined by a mathematical string over four chemical bases, {A, G, C, T}. A simple lemma is written to compute the DNA sequence using "A"; the rest can be derived easily for bio-molecular computation involving sensing, informatics or other computing tasks. This template, based on HOL syntax, is provided to encourage the reader to define novel chemical formalisms to advance nano-bio computing platforms and devices for bio-molecular computing.
Results and Discussions
In this communication we have focused on the chemical formalisms of a nano-bio system using DNA as the modeling element and have shown some insights into the nano-bio scale formalisms. Further, we explain how to design and compute a simple bio-molecular sequence using HOL (higher order logic), as discussed in the abstract. A template showing the implementation of HOL syntax is also presented in a separate section, so as to acquaint the reader with the HOL-based design methodology. We do not go into the in-depth details of HOL-based concepts, as there are plenty of scientific papers already published and available with online tutorials. Graphical views or flow charts of the design and methodology sequences are depicted in this paper to simplify the process of understanding the bio-chemical formalisms and computational concepts (Figures 1-5).
In the HOL template shown above, we discuss DNA as a nanoscale building block and as a bio-chemical tool to implement nano-bio scale computation. DNA is considered as an abstraction and as a "mathematical string", to showcase the theoretical model. As it is a known fact that DNA has four chemical bases, namely A, G, C and T, we can further define ``DNA`` as an {A, G, C, T} structure. The HOL template has a simple lemma and deduction methodology for the chemical base ``A``; the G, C and T bases are not described by lemmas, as we leave them as an exercise to the reader. For further understanding, a "Reaction-Diffusion" computing processor could easily be deduced by proper sequencing, thereby deriving the application. We draw inspiration from Adamatzky 15,16 to advance our research in reaction-diffusion computation.
Considerations of mathematical and chemical computing formalisms for biomolecular systems illustrated via Figures 1-5 as depicted below
The computer science community has been developing different approaches to support writing correct programs on a continuous basis, e.g. abstract interpretation, type systems, model checking and theorem proving. The art of theorem proving is devoted to providing tools to verify the correctness of a program by means of a formal mathematical proof. As large and complex programs necessarily require large and complex proofs, pen-and-paper proofs become very difficult or even impossible to grasp. For this reason, the proofs are created with the assistance of an interactive or an automated theorem prover; in our case it is the Isabelle system [1][2][3][4][5][6][7][8][9][10][11]18.
"Bio-chemical compounds which react are essentially parallel systems as per the existing and derived computing paradigms.Molecules of the same chemical compound will react in different ways at different moments.The high number of concurrent processes and parameters prevent them from being simulated using older methods.Hence novel methods to simulate them are in fact essential".Here, we present a novel method for the rational design of optimized DNA sequences for a wide range of technological applications.The advantages of our new HOL based sequence design concept can be summarized as follows -the sets of contextually essential sequences exhibit extremely narrow ranges of melting temperatures, a requirement that is central to all applications.Furthermore, the mathematical tools of our HOL-based method allow us to impose very complex and detailed requirements on the sequences to be generated.These requirements are then automatically satisfied without an exception in every one of them [1][2][3][4][5][6][7][8][9][10][11][12][13][14][15][16]18 .
"Reaction-Diffusion (R-D) chemical systems are well known now for their unique ability to efficiently solve combinatorial problems with natural parallelism.In R-D processors, both the data and the results of the computation are encoded as concentration profiles of the reagents.The computation per se is performed via the spreading and interaction of wave fronts.The R-D computers are parallel because the chemical medium's micro-volumes update their states simultaneously, and molecules diffuse and react in parallel" 15,16 .For more information on Reaction-Diffusion Computing Systems, we suggest Prof. Adam Adamatzky's website at UWE, Bristol, England, UK.
In our case, we are focusing on DNA-plasma interaction as the R-D chemical system. We are not discussing the full-scale implementation here, as we intend to show the reader only a methodology for developing chemical formalisms based on HOL. A detailed discussion is beyond the scope of this paper owing to space constraints. In our view and expectation, we are confident that readers could easily adapt the methodology and the HOL-based framework to suit their research according to the situation. Figures 1-5 serve this purpose.
Conclusions with future perspectives
Promising concepts of DNA Computing operate in natural noisy environments, such as in a glass of water or even a simple test tube in a laboratory. DNA Computing involves an evolvable platform for computation in which the computer construction machinery itself is embedded. Bio-inspired "embedded computing" is possible without electrical power in microscopic, error-prone and real-time environments. Using these mechanisms and technology compatible with our own bio-inspired approach, DNA Computing is linked to molecular construction. These computations may eventually also be employed to build three-dimensional, self-organizing, partially electronic or, more remotely, even quantum computers. Moreover, DNA Computing opens computers to a wealth of applications in intelligent manufacturing systems, complex molecular diagnostics and molecular process control.
In the future, we intend to focus on DNA-based nano-bio computing platforms, for example using the binding of DNA to graphene and their interactions in plasma radiation environments. We are in the process of designing and developing computational tools based on mathematical methods using HOL as a pioneering effort. We hope to achieve remarkable progress in this new approach to designing better nano-bio computing systems using plasma processing technologies. Both thermal and non-thermal plasmas could be used in performing our experiments to check or verify our nanoscale mathematical models and chemical formalisms using HOL. However, we are focusing mainly on non-thermal plasmas and bio-materials interactions at the moment in our R&D efforts.
We sincerely thank UNESP for providing a conducive environment and encouragement for research on novel concepts. Furthermore, we thank all those who have helped us directly and indirectly in producing the current paper for the POSMAT 2014 conference in Brazil. The authors strictly abide by open source software regulations, where applicable.
(* Finite sequences of the DNA material system using Higher Order Logic syntax *)
theory Seq
  imports Main
begin

datatype 'A seq = Empty | Seq 'A "'A seq"

(* Append two sequences of bases. *)
fun compute :: "'A seq => 'A seq => 'A seq" where
  "compute Empty ys = ys"
| "compute (Seq A xs) ys = Seq A (compute xs ys)"

(* Reverse a sequence of bases by repeated appending. *)
fun compute_rev :: "'A seq => 'A seq" where
  "compute_rev Empty = Empty"
| "compute_rev (Seq A xs) = compute (compute_rev xs) (Seq A Empty)"

(* DNA_A: appending the empty sequence changes nothing. *)
lemma compute_A: "compute xs Empty = xs"
  by (induct xs) auto

(* lemma compute_G: "describe the lemma here" -- DNA_G, left as an exercise to the reader *)
(* lemma compute_C: "describe the lemma here" -- DNA_C, left as an exercise to the reader *)
(* lemma compute_T: "describe the lemma here" -- DNA_T, left as an exercise to the reader *)

end
Figure 1. Total overview of the paper. DNA: a general sequence of genetic material made up of A, G, C, T (dsDNA or ssDNA could be used). Plasma: non-thermal. Please see Figure 5 for a simple explanation of the concurrent mechanism implementation and algorithm design.
Figure 3. Deduction of bio-inspired systems from classical computing systems.
Figure 5. Implementation of concurrent systems and their algorithm design.
"Computer Science",
"Chemistry",
"Biology"
] |
Necessary conditions for steep switching in a constant Resistor-Capacitor RCFET
We establish that the phenomenon of transient negative capacitance, conventionally linked to the delay in the response of domain switching of a ferroelectric material and modelled by a non-linear capacitor, can in fact be considered more generally applicable to any phenomenon that can be represented by an RC-equivalent circuit. We demonstrate the conditions for sub-60 mV/dec switching in an RC-FET, even if the R and C were constant along both forward and backward sweeps. For the semiconductor charge Q_ch, we show that the necessary condition for sub-60 mV/dec switching, dQ_ch/dΨ_s = (q/k_BT)·Q_ch, where Ψ_s is the surface potential, is possible only if Q_ch > 0 (i.e. when the transistor is ON) during the backward sweep. This insight contributes further understanding of the causes of hysteresis in commonly used SPICE models of FE-FETs.
Introduction
Current research in negative capacitance phenomena is focussed on attempts to eliminate hysteresis in negative capacitance FETs (NCFETs), whilst achieving steep subthreshold (SS) characteristics well below 60 mV/decade. Ferroelectric (FE) materials such as HfZrO are fully compatible with CMOS, and their ease of integration makes them a promising technology to continue voltage scaling below the 5 nm technology node [1]. The sub-60 mV/dec behaviour observed in NCFETs can result from the following mechanisms: (1) negative permittivity, or the S-shaped behaviour in a single-domain FE [2] under quasi-static conditions as governed by the Landau-Khalatnikov (LK) equation [3]; (2) transient negative capacitance in a multi-domain FE or even a paraelectric [4], as represented historically by the Miller model [5][6][7] via a non-negative series R-C circuit. In fact, even organic or solid-electrolyte FETs, with a redox mechanism in the gate insulator [8], have been demonstrated with sub-60 mV/dec behaviour by us in [9]. Similar to the FEFET, these devices can also be represented by an R-C circuit. (3) In practice, transfer characteristics of a FEFET are always measured in non-quasi-static conditions with a finite scan rate. In this case, the subthreshold slope depends upon both intrinsic negative permittivity and transient negative capacitance as a consequence of the scan rate. Therefore, an FEFET is a combination of both cases (1) and (2). FETs utilising negative permittivity promise hysteresis-free sub-60 mV/dec behaviour [10,11], which is typically observed at a smaller scan rate or frequency of gate bias sweep, only if the negative capacitance is stabilised. However, if the negative capacitance is not stabilised, these FETs can still exhibit sub-60 mV/dec in both directions of gate bias sweep, albeit with a hysteresis in the transfer characteristics [10,12], irrespective of how slowly the gate bias is scanned. As the scan rate of gate bias is increased, the transient negative capacitance also starts to play a role because of the finite time it takes for the polarisation to switch, which gives rise to a resistor in series with the capacitor of the FE. This interplay of the two mechanisms typically results in an improvement in SS during the backward sweep but a degradation during the forward sweep, and eventually causes it to become greater than 60 mV/dec at sufficiently high frequency of gate bias [9]. In case (2), the sub-60 mV/dec arising purely from transient negative capacitance is often accompanied by a counter-clockwise hysteresis in the transient transfer characteristics, unless it has been offset by trap-induced hysteresis in the clockwise direction [6].
To achieve hysteresis-free operation, the steep switching behaviour arising from a purely negative permittivity in a ferroelectric material is required to be separated from the one that stems from transient negative capacitance as expressed by the Miller model. The phenomenon of transient negative capacitance, despite having completely disparate origins in the case of multi-domain as opposed to single-domain ferroelectric, remains closely interlinked with negative capacitance. The single-domain regime, which is governed by the LK equation, can be expressed by a series R-C circuit in parallel with an oxide capacitor C ox , where C is considered non-linear and can attain negative values for a certain range of polarisation or can even be positive to yield steep switching [13]. In contrast, the multi-domain approximation is modelled by the Kolmogorov-Avrami-Ishibashi (KAI) model, which utilises a time-dependent polarisation [14] or the Miller model [15], which akin to the LK framework, can be equivalently described as a series R-C circuit. In the Miller model, where the gate insulator can be expressed as an R-C circuit, the C remains strictly greater than 0. During the transient sweep, the delay introduced by the R-C circuit leads to scenarios where a change in the surface potential of the semiconductor channel becomes greater than that of the gate bias, thus resulting in a body factor of less than 1. Equivalently, the voltage across the gate insulator changes in a direction that is opposite to both the applied gate bias and the surface potential of the semiconductor channel to accommodate this amplification. Since the surface potential directly affects the charge density in the channel, we can also say that the voltage across the insulator changes in the direction opposite to the change in the charge density in the channel, thus producing a transient negative capacitance. This phenomenon emerges purely from a delay introduced within the gate insulator. In different variations of such models at least one of the elements between R and C is always considered non-linear (with respect to polarisation) [13] to help explain the steep switching behaviour.
Here we establish that the phenomenon of transient negative capacitance can be considered more general than reported earlier. We demonstrate the conditions for sub-60 mV/dec switching in an RC-FET, even if both the R and C are constant.
Methodology
A schematic of a typical RC-FET gate stack is shown in Fig. 1a. In addition to an oxide capacitor C ox , the equivalent circuit of a gate dielectric also contains another branch with a series R-C circuit. Similar to a conventional n-MOSFET, an application of gate voltage V GS induces a sheet charge density Q ch of mobile carriers in the channel. When a voltage V DS is applied across its drain and source terminals, we observe a current I DS , owing to the presence of these mobile carriers in the channel.
To calculate the current and the transfer characteristics, we solve the equation of the R-C circuit of the gate dielectric in conjunction with the Poisson solver in the semiconductor channel, in a self-consistent manner, as illustrated in Fig. 1b. Q_ch consists of two terms, Q′_ch and Q″_ch, that are contributed by the two branches of the equivalent circuit in the gate oxide, consisting of the elements R-C in series and C_ox (see Fig. 1(a)). The gate voltage V_GS is dropped across the gate oxide (V_ox), the work function difference between the gate metal and the semiconductor (Φ_ms) and the surface potential Ψ_s at the semiconductor channel as
V_GS = Φ_ms + V_ox + Ψ_s.   (1)
The Poisson solver for the channel takes Ψ_s as input and produces Q_ch as output, given the energy band parameters of the semiconductor.
Fig. 1: (a) Schematic of the gate stack of an RC-FET. (b) Self-consistent solution of the gate insulator equivalent circuit with the Poisson solver within the semiconductor channel.
The simulations in this work are generated for fixed values of the oxide and semiconductor parameters R, C, and C_ox of 6.26 MΩ·cm², 3.19 F/cm², and 67 nF/cm², derived from a solid-electrolyte FET that shows steep switching via a redox mechanism in the insulator [9]. Nevertheless, the results can be generalised to any other values, but would apply under a different range of scan rates of the gate voltage, as will be proven by deriving the conditions for the body factor (m) to become less than unity.
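A simplified numerical sketch of the self-consistent loop of Fig. 1b is given below, written in Python. It is an illustrative reimplementation, not the solver used for this work: the Poisson solver is replaced by an ideal exponential subthreshold channel charge, the work function is set to zero, and all parameter values (including the charge prefactor and the round R, C and C_ox numbers) are placeholders that are not tuned to reproduce Fig. 2; scipy is assumed for the root finder.

import numpy as np
from scipy.optimize import brentq

KT = 0.0259          # thermal voltage at room temperature, V
Q0 = 1e-12           # C/cm^2, scale of the subthreshold channel charge (placeholder)
R = 6.3e6            # ohm.cm^2 (placeholder)
C = 3.2e-6           # F/cm^2   (placeholder)
COX = 67e-9          # F/cm^2   (placeholder)

def channel_charge(psi_s):
    # Ideal exponential subthreshold charge standing in for the Poisson solver.
    return Q0 * np.exp(psi_s / KT)

def solve_psi(vgs, q_rc):
    # Charge balance Q_ch(psi) = Cox*(Vgs - psi) + Q'_rc, with the work function taken as 0.
    f = lambda psi: channel_charge(psi) - (COX * (vgs - psi) + q_rc)
    return brentq(f, -2.0, 2.0)

def sweep(rate_v_per_s=1.0, v_max=1.5, steps=4000):
    dt = 2.0 * v_max / rate_v_per_s / steps
    vgs_path = np.concatenate([np.linspace(0.0, v_max, steps // 2, endpoint=False),
                               np.linspace(v_max, 0.0, steps // 2)])
    q_rc, psi = 0.0, []
    for vgs in vgs_path:
        p = solve_psi(vgs, q_rc)
        q_rc += dt * ((vgs - p) - q_rc / C) / R   # explicit Euler on the series R-C branch
        psi.append(p)
    return vgs_path, np.array(psi)

vgs, psi = sweep()
q_ch = channel_charge(psi)
dv, dlog = np.diff(vgs), np.diff(np.log10(q_ch))
valid = np.abs(dlog) > 1e-12
swing = np.abs(dv[valid] / dlog[valid]) * 1e3     # mV per decade of channel charge
print(f"steepest swing anywhere along the sweep: {swing.min():.1f} mV/dec")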
Results and discussion
The simulated transfer characteristics of the RC-FET from the self-consistent model for different sweep rates of the gate voltage are plotted in Fig. 2. At an extremely low scan rate (quasistatic), any delay introduced by the series R-C branch becomes inconsequential, and the large capacitor C ≫ C_ox in this branch induces a very high charge in the channel, thereby producing the highest possible I_DS at any given V_GS in the forward sweep. During the backward sweep, I_DS simply retraces its forward-sweep value, owing to almost no delay in the discharging of C at this extremely slow scan rate.
Fig. 1 (a) Schematic of the gate stack of an RC-FET. (b) Self-consistent solution of the gate-insulator equivalent circuit with the Poisson solver within the semiconductor channel.
As the scan rate of the gate voltage is increased, the delay in the R-C circuit results in a counter-clockwise hysteresis, with the subthreshold slopes during the forward and backward sweeps becoming gentler and steeper, respectively. This trend continues up to a scan rate of 1.0 V/s, where the counter-clockwise hysteresis becomes the widest, whilst the subthreshold slope in the backward sweep falls below 60 mV/dec. Beyond this point, the delay introduced by the R-C becomes so large that very little charging or discharging of C in this branch takes place, leading to a narrower hysteresis and a less steep subthreshold slope in the backward sweep. At the maximum scan rate of 460 V/s, the delay in the R-C is so large that this branch becomes almost non-responsive to changes in the gate voltage, and most of the charge in the semiconductor is then induced by C_ox. Thus, the coupling of the gate to the semiconductor reduces to that of a typical MOS capacitor, without any sub-60 mV/dec switching. Since these transfer characteristics correspond to a slow-switching SE-FET with a large R in the mega-ohm range, they reduce to those of a MOSFET at relatively small scan rates. If R were reduced to a few ohms, for instance, the corresponding scan rates would be boosted by about six orders of magnitude.
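Reusing the sweep() sketch above, the non-monotonic dependence of the backward-sweep subthreshold swing on scan rate can be explored qualitatively as follows (again an illustration with our own helper names and illustrative parameters, not the paper's simulation).

def min_ss_backward(v_gs, q_ch):
    # Minimum subthreshold swing (mV/dec) over the backward half of the sweep,
    # using Q_ch as a proxy for I_DS; tiny log-increments are clipped to avoid
    # division-by-zero artefacts.
    half = len(v_gs) // 2
    dv = np.abs(np.diff(v_gs[half:]))
    dlog = np.abs(np.diff(np.log10(q_ch[half:])))
    return 1e3 * np.min(dv / np.maximum(dlog, 1e-30))

for rate in (0.001, 0.2, 1.0, 460.0):
    v, q = sweep(rate)
    print(f"{rate} V/s: SS_min (backward) = {min_ss_backward(v, q):.1f} mV/dec")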
To understand the behaviour of the subthreshold slope in the forward and backward sweeps, the gate dielectric of the RC-FET is first approximated in its limits of low and high scan rates. Following Fig. 1(a), the mobile sheet charge density in the channel, Q_ch, can be described as the sum of the charge densities contributed by the two branches of the equivalent circuit in the gate oxide (Eq. (2)), while the voltage drop across the gate oxide, V_ox, can be described in terms of the series R-C branch (Eq. (3)). At low scan rate, dV_ox/dt → 0; taking the derivative of Eq. (2) and substituting dV_ox/dt = 0 leads to Eq. (5). Since we assume C ≫ C_ox, the charge density contributed by the series R-C branch in the limit of a low scan rate is much larger than that in the branch containing C_ox, and Eq. (5) can be further simplified to Eq. (6). Hence, at low scan rate, the gate-oxide circuit behaves simply as a series R-C circuit, with negligible influence from the parallel capacitor C_ox. Similarly, to derive an expression at a high scan rate (dV_ox/dt → ∞), we first take the derivative of Eq. (2) with respect to V_ox, substitute dQ'_ch/dt from Eq. (3) and Q'_ch from Eq. (2), and finally take the limit dV_ox/dt → ∞. In this limit, the behaviour of the gate oxide reduces to that of an oxide capacitor, thus confirming the behaviour observed in Fig. 2 at a scan rate of 460 V/s.
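Since the numbered displays are missing from this copy, the two limits can be sketched as follows; this is a plausible reconstruction from the circuit topology described above, not the paper's own Eqs. (2)-(9):

Q_{ch} = Q'_{ch} + C_{ox} V_{ox}, \qquad V_{ox} = R\,\frac{dQ'_{ch}}{dt} + \frac{Q'_{ch}}{C}

\text{low scan rate } \left(\frac{dV_{ox}}{dt} \to 0,\; C \gg C_{ox}\right):\quad Q_{ch} \approx Q'_{ch}, \qquad V_{ox} \approx R\,\frac{dQ_{ch}}{dt} + \frac{Q_{ch}}{C}

\text{high scan rate } \left(\frac{dV_{ox}}{dt} \to \infty\right):\quad \frac{dQ'_{ch}}{dV_{ox}} \to 0, \qquad Q_{ch} \approx C_{ox} V_{ox}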
To understand the mechanism of steep subthreshold switching, we next investigate how the coupling between the gate and the semiconductor varies with scan rate, in the limit where the scan rate remains so low that V_ox can be described by Eq. (6). Differentiating Eq. (6) with respect to Psi_s gives Eq. (10). In the subthreshold regime, the dependence between Q_ch and Psi_s, which is governed by the Poisson equation (see Fig. 1(b)), can be simplified to an exponential dependence (Eq. (11)), where Q_0 and k_B are the intrinsic charge density of the semiconductor and the Boltzmann constant, respectively. Differentiating Eq. (11) with respect to Psi_s and substituting the result for dQ_ch/dPsi_s into Eq. (10) gives Eq. (12). In the first term on its right-hand side, we have used the chain rule to first write dQ_ch/dt as (dQ_ch/dPsi_s)(dPsi_s/dt) before applying the differentiation with respect to Psi_s, and we have ignored the contribution of the term d²Psi_s/(dPsi_s dt) in the limit of a small scan rate. For the body factor m to become less than unity, we must have m = dV_GS/dPsi_s < 1, or equivalently dV_ox/dPsi_s < 0, following Eq. (1). Substituting this into Eq. (12) results in Eq. (13). Again, for the body factor m = dV_GS/dPsi_s to be less than unity we must also satisfy Eq. (14). If Q_ch > 0, we can drop Q_ch from both sides of Eq. (13) without changing the direction of the inequality, and utilising Eq. (14) we arrive at Eq. (15). Equation (15) represents the condition for the body factor to be less than unity when Q_ch > 0, i.e. when the device is in the ON state. Since dV_GS/dt is negative in that expression, Eq. (15) is only applicable during the backward sweep. Equation (13) also presents another condition when Q_ch < 0, i.e. when the device is in the OFF state: with Q_ch < 0, dropping Q_ch from both sides flips the inequality sign, leading to the desired expression, Eq. (16). Since the device remains OFF under this condition, the transfer characteristics show no steep switching, even though the body factor is less than unity [3]. Figures 3a and b show the behaviour of V_ox with respect to V_GS for different scan rates during the forward and backward sweeps, respectively. At the extremely small scan rate of 0.001 V/s, the slope dV_ox/dV_GS remains greater than 0 throughout both the forward and backward sweeps, hence no steep switching is observed, as confirmed in Fig. 3. As the scan rate is increased, regions with negative slope start to emerge in both forward and backward sweeps, as seen for the scan rates of 0.2 and 1.0 V/s; these regions are responsible for driving the body factor below unity.
The body factor and the minimum subthreshold swing SS_min as a function of scan rate are plotted in Figs. 4(a) and (b), respectively. The body factor shows a decline as the scan rate increases. During the backward sweep, that is dV_GS/dt < 0, as soon as the scan rate |dV_GS/dt| increases beyond 0.0013 V/s the body factor becomes less than unity.
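For completeness, the body factor can also be extracted from the toy simulation sketched earlier (our illustration, assuming the subthreshold exponential relation between Q_ch and Psi_s holds over the region of interest).

def body_factor(v_gs, q_ch, v_t=0.026):
    # m = dV_GS/dPsi_s; Psi_s is recovered from Q_ch via Q_ch = Q0*exp(Psi_s/v_t),
    # so the unknown prefactor Q0 drops out of the derivative.
    psi = v_t * np.log(q_ch)
    return np.diff(v_gs) / np.diff(psi)

v, q = sweep(1.0)
half = len(v) // 2
m_back = body_factor(v, q)[half:]
print("smallest positive body factor in the backward sweep:", m_back[m_back > 0].min())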
Conclusion
The gate insulator of many steep-switching devices, such as multi-domain FE-FETs, SE-FETs, and other organic FETs, can often be reduced to an equivalent series R-C circuit.
Here, we present a generic RC-FET device with its gate insulator modelled as a series R-C circuit. We have shown that the transfer characteristics of such devices exhibit sub-60 mV/dec behaviour in the backward sweep, and we derive the conditions that explain this behaviour. Our results and the derived conditions conclusively establish that, despite the promise of achieving a body factor below unity in both forward and backward sweeps, devices governed by such transient phenomena can only show a sub-60 mV/dec SS during the backward sweep. We believe that our results will help distinguish sub-60 mV/dec behaviour originating from negative capacitance from that originating from other mechanisms, such as redox processes, that resemble a series R-C circuit. Hence, for supply-voltage scaling to continue, future research should focus on sub-60 mV/dec behaviour in the forward sweep, as this cannot be facilitated by the undesirable transient negative capacitance.
Funding This study was supported by UK-India Education and Research Initiative (GB) (Grant No. P436).
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Fig. 4 Change in (a) body factor and (b) minimum subthreshold swing (SS_min) with scan rate. Whilst the body factor becomes less than unity in both forward and backward sweeps, SS_min shows a decline only in the backward sweep, in agreement with the conditions derived in Eqs. (15) and (16) | 3,999.8 | 2021-08-01T00:00:00.000 | [
"Physics"
] |
An Optimal Teaching and Learning based Optimization with Multi-Key Homomorphic Encryption for Image Security
Due to the drastic rise in multimedia content, digital images have become a major carrier of data. Generally, images are communicated or archived via wireless communication channels, and the significance of data security increases accordingly. To accomplish security, encryption is an effective technique in which images are encrypted using secret keys in such a way that they are not readable by an attacker. In this view, this study focuses on the design of a Teaching and Learning based Optimization (TLBO) with Multi-Key Homomorphic Encryption (MHE) technique, called the MHE-TLBO algorithm. The goal of the MHE-TLBO algorithm is to optimally select multiple keys using the TLBO algorithm for the encryption and decryption processes. In addition, the MHE-TLBO algorithm derives a fitness function involving the peak signal-to-noise ratio (PSNR) and thereby ensures the superior quality of the reconstructed image. To validate the security performance of the MHE-TLBO algorithm, a comprehensive result analysis is made, and the simulation results demonstrate the superiority of the MHE-TLBO algorithm in terms of different aspects.
Introduction
Recently, with the rapid popularization and promotion of digital communication and network technologies across the globe, digital video and digital images have become a significant means of data transmission and storage in computer networks in both military and civil fields [1,2]. However, network security problems have been a significant factor restricting and plaguing the growth of network technologies [3,4]. In particular, for the data resources of government departments and the public, achieving information security in computer networks is a significant direction and focus of research in information security and network security. Among other media, digital video and digital images have become significant carriers of information broadcast over networks, owing to their convenience and intuitiveness [5,6]. Hence, the security protection of digital images has attracted increasing interest from all parties. Particularly against the background of increasingly serious network security situations over the past few years, data sharing and transmission based on digital images frequently face problems of information tampering, theft, attack, and deletion, which have caused huge losses to the publishers and owners of digital images [7][8][9]. Among security methods for digital data, encryption is the most popular technique: data can be protected by encrypting the information, and when the reliability and security of the encryption technique are sufficiently high, the security of the digital data can be ensured [10]. Hence, the study of digital image encryption methods and technology is a significant direction for digital image security. However, most encryption systems and technologies are designed largely around the requirements of text encryption, and the most popular encryption schemes cannot attain good results in terms of the encryption quality and compatibility of digital image encryption. Even though digital images can be treated as 2D datasets, cryptographic systems that straightforwardly apply text encryption methods frequently suffer from inefficiency in encryption and decryption, low security, and low practicability [11,12]. Studying an encryption method or cryptographic system appropriate for digital image encryption is therefore a necessary step for protecting the security of digital images in network environments.
Khan et al. [13] proposed an effective method for constructing highly nonlinear cryptographic substitution boxes (S-boxes) as an alternative to algebraic or chaotic construction methods. The PSO method is used to construct the highly nonlinear S-box; in the proposed method, the initial populations are randomly generated, and the position vectors of the particles are employed to produce the S-box. A hyperchaotic image encryption scheme [14] was presented based on PSO and CA models. First, to increase the capability of resisting plaintext attacks, the initial condition of the hyperchaotic system is produced from hash values that are closely associated with the plaintext image to be encrypted. Furthermore, the fitness of the PSO is the correlation coefficient among neighbouring pixels of the image.
Farah et al. [15] proposed a novel image cipher based on Shannon's properties of confusion and diffusion. The presented approach relies on novel enhanced substitution boxes, constructed using a chaotic Jaya optimization approach that generates S-boxes according to their non-linearity scores. The aim of the optimization method is to obtain a bijective matrix with a high non-linearity score. The authors of [16] studied an image encryption method based on a MOPSO method, 1D Logistic maps, and DNA encoding sequences. The primary components of this work include the subkey sequences selected with the PSO method, hash values of the shuffle mark bits, and the plaintext images. A random DNA mask image is generated through DNA encoding and the Logistic map. Then, block shuffling of the plaintext DNA encoding sequences is utilized to carry out the encryption.
In [17], hyperchaotic maps are enhanced by a multi-objective evolutionary optimization method. The DLS-MO method is employed to obtain the optimal parameters of the encryption factors and the hyperchaotic map. Next, with the optimal parameters, the hyperchaotic map generates the secret key. This secret key is later employed to perform diffusion and permutation on the plain image to produce the encrypted image. In [18], a novel medical image encryption method integrating GSAPSO and MQC systems is presented to obtain improved security performance. First, an enhanced MQC method is employed to generate the key streams. Then, the crossover and selection operations of the GA method are employed to process the plaintext images. The optimal sequence created using the SA method is applied for scrambling, while the PSO method is used to initialize the SA method; the initial temperature is fixed based on the best fitness value of the initial population.
In [19], a hybrid image encryption technique was presented based on chaos and a GA. The encryption method includes three major phases: improvement, confusion, and diffusion, where the improvement stage uses the GA. Initially, Chen's chaotic map is employed in the confusion stage to produce scrambled images by shuffling the plain-image pixels, and in the diffusion phase a Logistic-Sine map alters the pixels' grey-level values. This creates a few encrypted images that are considered as the initial population for the GA. Later, using the GA, the encrypted image is further enhanced.
This study focuses on the design of a Teaching and Learning based Optimization (TLBO) with Multi-Key Homomorphic Encryption (MHE) technique, called the MHE-TLBO algorithm. The goal of the MHE-TLBO algorithm is to optimally select multiple keys using the TLBO algorithm for the encryption and decryption processes. In addition, the MHE-TLBO algorithm derives a fitness function involving the peak signal-to-noise ratio (PSNR) and thereby ensures the superior quality of the reconstructed image. To validate the security performance of the MHE-TLBO algorithm, a comprehensive result analysis is made in terms of different measures and security procedures. Semantically secure homomorphic public-key encryption techniques are a focal cryptographic tool for secure multiparty computation problems. The homomorphic property is valuable for building a secure model with a high-security data-recovery plan. Such encryption schemes are employed to perform tasks on encrypted data without knowledge of the private key (i.e., with no decryption); for instance, the client is the sole holder of the confidential key [20]. The homomorphic evaluation technique takes as input polynomially many cipher images that are encrypted under N keys, together with the corresponding evaluation keys, and delivers the resulting cipher images. Multiple encryption is a way of altering a single message into an obscured form by encrypting it a number of times, with the same or different techniques; it is also referred to as cascade encryption or cascade ciphering. In the examined model, encryption as well as decryption are carried out with several keys. The presented MHE considers three stages: multiple key generation, optimal key determination, and the encryption/decryption approach [21].
Fig. 1. Block diagram of MHE-TLBO model
A key is employed for encrypting and decrypting the data. This technique is utilized to encode and decode the images using symmetric keys; both privacy and trust-value security are provided. The private key (prk) and the corresponding public key (puk) form one of several key pairs. The key pairs are employed with the asymmetric-key technique; at this point, several keys K = {K1, K2, ..., Kn} are created for the MHE. In specific cases, the keys are randomly created utilizing a Random Number Generator. In order to select the optimum key among the several keys, the TLBO technique is implemented.
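The key-pool generation and selection step can be sketched as follows; this is a simplified illustration in Python with hypothetical helper names of our own, and it does not reproduce the actual MHE cipher, whose fitness in MHE-TLBO is the PSNR-based criterion described in the text.

import numpy as np

def generate_keys(n_keys, length, rng):
    # Randomly generate a pool of candidate keys with a random number generator.
    return [rng.integers(0, 256, size=length, dtype=np.uint8) for _ in range(n_keys)]

def select_best_key(keys, fitness):
    # Pick the key with the highest fitness. In MHE-TLBO the search over keys is
    # carried out by TLBO rather than by exhaustive evaluation of the pool.
    scores = [fitness(k) for k in keys]
    return keys[int(np.argmax(scores))]

def byte_entropy(key):
    # Toy stand-in fitness for demonstration only: prefer keys whose bytes are
    # close to uniformly distributed.
    _, counts = np.unique(key, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(seed=1)
pool = generate_keys(n_keys=20, length=256, rng=rng)
best_key = select_best_key(pool, byte_entropy)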
The TLBO technique simulates the transmission of knowledge between a teacher and the students during the teaching process [22]. Assume that two teachers, T1 and T2, teach a subject with the same content to learners of identical merit level in two different classes. Curves 1 and 2 show the marks obtained by the learners taught by teachers T1 and T2, respectively. The obtained marks are assumed to follow a normal distribution, although in practice the distribution may exhibit skewness. The normal distribution is given in Eq. (1), f(x) = (1/(σ√(2π))) exp(−(x − μ)²/(2σ²)),
where σ² denotes the variance, μ the mean, and x the value at which the normal distribution function is evaluated. Curve 2 corresponds to better performance than Curve 1, and so it can be stated that T2 is better than T1 with respect to teaching. The difference in performance is measured by comparing their means (M2 for Curve 2 and M1 for Curve 1); for instance, an excellent teacher shifts the mean of the learners' outcomes towards better values. The learners also learn by interacting among themselves, which further supports improving their outcomes. Based on the above teaching procedure, a mathematical model is formulated for optimizing unconstrained non-linear continuous functions, yielding an effective optimization method called the TLBO technique. It comprises two essential levels, the teacher phase and the learner phase. In the teacher phase, the update depends on the difference between the teacher and the mean of the learners, where TF signifies the teaching factor that determines the value of the mean to be changed, and ri denotes a random number in the range 0 to 1. The value of TF is either one or two, which is again a heuristic step chosen in a random manner with equal probability as TF = round[1 + rand(0, 1)].
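A minimal, generic TLBO sketch is given below for illustration; it follows the standard teacher and learner phases described above, is written for maximisation, and uses our own function names. In MHE-TLBO the decision vector would encode a candidate key and the fitness would be the PSNR-based criterion.

import numpy as np

def tlbo(fitness, bounds, pop_size=20, iters=100, rng=None):
    # 'bounds' is a (low, high) pair of arrays defining the search space;
    # 'fitness' maps a candidate vector to a score to be maximised.
    rng = rng or np.random.default_rng()
    low, high = map(np.asarray, bounds)
    dim = low.size
    pop = rng.uniform(low, high, size=(pop_size, dim))
    scores = np.array([fitness(x) for x in pop])
    for _ in range(iters):
        # Teacher phase: move every learner towards the best one.
        teacher = pop[np.argmax(scores)]
        mean = pop.mean(axis=0)
        tf = rng.integers(1, 3)                      # teaching factor, 1 or 2
        for i in range(pop_size):
            cand = np.clip(pop[i] + rng.random(dim) * (teacher - tf * mean), low, high)
            s = fitness(cand)
            if s > scores[i]:
                pop[i], scores[i] = cand, s
        # Learner phase: pairwise interaction between learners.
        for i in range(pop_size):
            j = rng.choice([k for k in range(pop_size) if k != i])
            step = (pop[j] - pop[i]) if scores[j] > scores[i] else (pop[i] - pop[j])
            cand = np.clip(pop[i] + rng.random(dim) * step, low, high)
            s = fitness(cand)
            if s > scores[i]:
                pop[i], scores[i] = cand, s
    return pop[np.argmax(scores)], scores.max()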
Conclusion
This study has designed a new MHE-TLBO algorithm to ensure security in image transmission. The goal of the MHE-TLBO algorithm is to optimally select multiple keys using TLBO algorithm for encryption and decryption processes. In addition, the MHE-TLBO algorithm has derived a fitness function involving peak signal to noise ratio (PSNR) and thereby ensures the superior quality of the reconstructed image. | 2,388.4 | 2021-01-01T00:00:00.000 | [
"Computer Science"
] |
Conserved Quantities and the Algebra of Braid Excitations in Quantum Gravity
We derive conservation laws from interactions of braid-like excitations of embedded framed spin networks in Quantum Gravity. We also demonstrate that the set of stable braid-like excitations form a noncommutative algebra under braid interaction, in which the set of actively-interacting braids is a subalgebra.
Introduction
Recently, there has been a significant amount of work towards a quantum theory of gravity with matter arising as topological invariants [1,2,13,3,8,10,12,11]. References [1,2,13] work with framed three-valent spin networks present in models related to Loop Quantum Gravity with non-zero cosmological constant [5,6], in which the topological invariants of ribbon braids are able to detect chirality and encode chiral conservation laws. However, the results of this approach have a serious limitation, in the sense that there are no dynamics for the conserved quantities [2].
To resolve this limitation, a new approach based on embedded four-valent spin networks was proposed in [3] and shown to have dynamics built in by means of the so-called dual Pachner moves [8]. Here the four-valent spin networks can be understood as those that naturally occur in spin foam models [7], or in a more generic context as the original spin networks proposed by Penrose [9], plus embedding.
The dynamical objects found by the new approach are three-strand braids, each of which is formed by three common edges of two adjacent nodes of the embedded four-valent spin network. The stable three-strand braids, under a certain stability condition, are local excitations [8,14]. Among all stable braids, there is a small class of braids that are able to propagate on the spin network. The propagation of these braids is chiral, in the sense that some braids can only propagate to their left with respect to the local subgraph containing them, while some only propagate to their right and some do both [3,8]. There is another small class of braids, the actively-interacting braids (hereafter called "active braids" for short); each is two-way propagating and is able to merge with a neighbouring braid when the interaction condition is met [8]. In the sequel, braids that are not active are called passive, including stationary braids, i.e. those that do not propagate at all. References [3,8] are based on a graphic calculus developed therein. Although the graphic calculus has its own advantages, in particular in describing, e.g., the full procedure of the propagation of a braid, it is not very convenient for finding conserved quantities of a braid, which are useful for characterizing the braid as a matter-like local excitation. In view of this, [10] proposed an algebraic notation for the active braids and derived conserved quantities by means of the new notation.
To these ends, in this paper, we generalize the algebraic notation in [10] to the case of generic three-strand braids. Within this notation, the algebraic equivalence moves are defined and the quantities conserved under these are identified. Finally the algebra of interactions between active braids and passive braids is discussed. This leads to the following results: 1. There exist conserved quantities under interactions and we are able to show the form of these conservation laws.
2. Precise algebraic forms of braid interactions are presented.
3. The set of all stable braids forms an algebra under braid interaction, in which the set of all active braids is a subalgebra. 4. This algebra is noncommutative, due to the fact that the left and right interactions of an active braid onto another braid are not the same in general. Conditions for commutative interactions are explicitly given. 5. Asymmetric interactions can be related by discrete transformations, such as P, T, CP, and CT.
An immediate application of these results is realized in a companion paper [11] which discovers the C, P, and T transformations of braids by means of conserved quantities found in [10] and in this paper.
Notation
We will extend the algebraic notation of active braids to the general case, namely to propagating braids and in fact to all braids. However, for illustrative purposes we keep the graphic notation wherever necessary. We adopt the graphical notation we proposed in [3,8]. A generic 3-strand braid is shown in Fig. 1(a), while a concrete example is depicted in Fig. 1(b). More precisely, what are shown in Fig. 1 are braid diagrams, i.e. projections of the true 3-strand braids embedded in a topological three-manifold. Each spin network can be embedded in various ways, some of which are diffeomorphic to each other. The projection of a specific embedding of a braid is called a braid diagram; many braid diagrams are equivalent and belong to the same equivalence class, in the sense that they correspond to the same braid and can be transformed into each other by equivalence moves [3]. Thus a braid refers to the whole equivalence class of its braid diagrams. However, one can choose a braid diagram of an equivalence class as the representative of the class; we therefore will not distinguish a braid from a braid diagram in the sequel unless an ambiguity arises. Besides, a braid always means a 3-strand braid.
It is important to emphasize the choice of the representative of an equivalence class of braid diagrams. In [8], each equivalence class of braid diagrams is represented by its unique element which has zero external twists (see Fig. 1(b) for an example). This choice makes the propagation and interaction of braids defined in [8] easier to handle. However, there are three types of stable braids, viz active braids, propagating braids, and stationary braids [8,12]. Propagating braids are able to exchange places with their adjacent substructures in the graph under the local dynamical moves, whereas the stationary braids cannot propagate in the way the propagating braids do. These braids are in most cases represented by braid diagrams of zero external twists.
On the other hand, as pointed out in [8,10], the active braids, each of which can propagate and can interact onto any other braid (in the sense that it can merge with another adjacent braid to form a new braid as long as the interaction condition is met), happen to be both completely left- and right-reducible, i.e. such a braid is always equivalent to a trivial braid diagram with possibly twists on its three strands and two external edges.
Figure 1: (a) A generic 3-strand braid diagram formed by the three common edges of two end-nodes. S_l and S_r are the states of the left and right end-nodes respectively, taking values in + or −. X represents a sequence of crossings, from left to right, formed by the three strands between the two nodes. T_a, T_b, and T_c are the internal twists respectively on the three strands from top to bottom, on the left of X. T_l and T_r, called external twists, are respectively on the two external edges e_l and e_r. All twists are valued in Z in units of π/3 [3]. (b) A concrete example of a braid diagram, in which the left end-node is in the '+' state while the right end-node is in the '−' state.
Thus it is more convenient to represent each of these braids by a trivial braid diagram (which is not unique) of the corresponding equivalence class. In fact, [10] chose this representation and derived conserved quantities of this type of braid under interaction by introducing an algebraic notation for them and a symbolic way of handling their interactions. This trivial representation of active braids is actually a special case of the so-called extremal representation of generic braids [3]. A braid diagram serving as an extremal representation of a braid is called an extremum of the braid. The name 'extremum' manifests its meaning: a braid diagram with the least number of crossings among all braid diagrams in the same equivalence class. Therefore, our generalized algebraic notation of a braid should accommodate all possible choices of representative of a braid, especially the aforementioned special representations. Note that the class of propagating braids excludes the active braids, even though an active braid is also propagating.
Let us now concentrate on the generic case depicted in Fig. 1(a). Apart from the internal twists, the interior of a braid, which is the region between the two end-nodes and is characterized by the sequence of crossings, satisfies the definition of an ordinary braid, arranged horizontally. We can thus denote the sequence of crossings X by generators of the braid group B_3. The group B_3 has two generators and their inverses. Since we arrange a braid diagram horizontally, the generator and its inverse formed by the upper two strands of a braid are named u and u^{-1} respectively, while the generator and its inverse formed by the lower two strands of a braid are d and d^{-1} respectively. This convention is illustrated in Fig. 2. Then, for example, the crossing sequence in Fig. 1(b) reads X = u^{-1}d, from left to right. We also assign an integral value, the crossing number, to each generator, i.e. u = d = 1 and u^{-1} = d^{-1} = −1.
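To make the bookkeeping concrete, a crossing sequence can be stored as a list of generators carrying crossing numbers ±1; the following small Python sketch (our own helper names, not from the original paper) implements the conventions just stated.

CROSSING_NUMBER = {"u": 1, "d": 1, "u-": -1, "d-": -1}

def crossing_sum(X):
    # Sum of crossing numbers of a sequence; e.g. ["u-", "d"] gives 0,
    # corresponding to the example X = u^{-1} d of Fig. 1(b).
    return sum(CROSSING_NUMBER[x] for x in X)

def inverse(X):
    # Inverse word: reverse the order and invert each crossing.
    flip = {"u": "u-", "u-": "u", "d": "d-", "d-": "d"}
    return [flip[x] for x in reversed(X)]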
For an arbitrary sequence X of order n = |X|, namely the number of crossings, we can write X = x_1 x_2 ... x_i ... x_n, where x_i ∈ {u, u^{-1}, d, d^{-1}} represents the i-th crossing from the left. Therefore, each x_i in X has a two-fold meaning: on the one hand, it is an abstract crossing; on the other hand, it represents an integral value, 1 or −1. When an x_i appears in a multiplication it is usually understood as an abstract crossing, while in a summation it is normally an integer. Note that, as generators of the group B_3, the generators of X obey the following equivalence relations.
We assume that in any X the above equivalence relations have been applied to remove any pair of a crossing and its inverse. For example, the sequence u d u^{-1} d^{-1} should be written as u d u^{-1} d^{-1} = d^{-1} u d d^{-1} = d^{-1} u by the first relation above. The crossing sequence X clearly induces a permutation, denoted by σ_X, of the three strands of a braid. It is obvious that the induced permutation σ_X takes values in S_3, the permutation group of a set of three elements, whose elements can be written in terms of disjoint cycles (Eq. 2). More precisely, the three internal twists in a triple (T_a, T_b, T_c) on the left of X are permuted by the induced permutation into (T_a, T_b, T_c)σ_X = (T'_a, T'_b, T'_c), which is the triple of the internal twists on the right of X. Here σ_X is defined to be a left-acting function on the triple of internal twists for two reasons. Firstly, this is a convention of the permutation group. Secondly, when there is another crossing sequence, say X', appended to the right of X, which usually happens in interactions of braids, we naturally have (T_a, T_b, T_c)σ_X σ_{X'} = (T_a, T_b, T_c)σ_{XX'}, such that the induced permutation of the newly appended crossings is applied after the action of σ_X.
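Under the assumption, suggested by the convention above, that u and u^{-1} exchange the upper two strands while d and d^{-1} exchange the lower two, the induced permutation and its action on a twist triple can be computed as in the following sketch (an illustration of ours, not the paper's code):

def induced_permutation(X):
    # Return sigma_X as a tuple p with p[i] = final slot of the strand that
    # starts in slot i (0 = top, 1 = middle, 2 = bottom).
    pos = [0, 1, 2]
    for x in X:
        a, b = (0, 1) if x in ("u", "u-") else (1, 2)
        # the strands currently occupying slots a and b exchange places
        for i in range(3):
            if pos[i] == a:
                pos[i] = b
            elif pos[i] == b:
                pos[i] = a
    return tuple(pos)

def permute_twists(triple, X):
    # Map the triple (T_a, T_b, T_c) on the left of X to the triple on its right:
    # each twist travels with its strand.
    p = induced_permutation(X)
    out = [None, None, None]
    for i, t in enumerate(triple):
        out[p[i]] = t
    return tuple(out)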
On the other hand, the triple on the right of X can be mapped back by σ_X^{-1}, the inverse of σ_X, which is a right-acting function on the triple. One should keep in mind that the indices of internal twists such as T_a and T'_a, a and a', are abstract and have no meaning until their values and positions in the triple of internal twists are fixed. So (T_a, T_b, T_c) = (T_d, T_e, T_f) means T_a = T_d, T_b = T_e, and T_c = T_f, respectively. In the rest of the paper, we will also consider the addition of two triples of twists, i.e. (T_a, T_b, T_c) + (T_d, T_e, T_f) = (T_a + T_d, T_b + T_e, T_c + T_f). Therefore, we can denote a generic braid as in Fig. 1(a) with the internal-twist triple written on the left of the crossing sequence X, between the end-node states and the external twists, or equally well with the permuted triple written on the right of X. In this way, which side of the crossing sequence a triple of internal twists is on is transparent; the braid in Fig. 1(b), for instance, can be written in either form. It is now manifest that a generic braid is characterized by the 8-tuple {T_l, S_l, T_a, T_b, T_c, X, S_r, T_r}. As mentioned before, S_l and S_r are just signs, + or −, such that −(+) = − and −(−) = +. Hence, for an arbitrary end-node state S, we may use both −S and S̄ for the inverse of S. In fact, this 8-tuple is not completely arbitrary for different types of braids. For a propagating braid B of n crossings represented by a braid diagram with no external twists, we have the following constraints.
2. The triple (S_l, X, S_r) is not arbitrary. If B is (left-) right-propagating, then according to [3,8], B must be (left-) right-reducible; in particular, its first crossing on the (left) right can be eliminated by the equivalence move, a π/3 rotation which also flips the (left) right end-node. That is, letting X = x_1 x_2 ... x_{n−1} x_n, then under a π/3 rotation on the (left) right end-node, the triple (S_l, X, S_r) becomes (S̄_l, x_2 ... x_{n−1} x_n, S_r) (respectively (S_l, x_1 x_2 ... x_{n−1}, S̄_r)).
3. The triple (T a , T b , T c ) is not arbitrary; however, the general pattern of them, ensuring the propagation of B, has not yet been found and is under investigation. But the algebra formulated and the conserved quantities to be found in this paper may turn out to be helpful to resolve this problem.
Any braid whose characterizing 8-tuple violates the above constraints is not propagating. It is useful in certain situations to represent a propagating braid by its extrema in the corresponding equivalence class, which are the braid diagrams with the least number of crossings, obtained from the unique representative with zero external twists by rotations that remove the reducible crossings [3]. However, we would like to leave the discussion of this representation to the next section, after we have defined rotations symbolically.
For an active braid, we choose to represent it by its extrema, i.e. its trivial braid diagrams with external twists. This has actually been carried out in [10]; we thus will not repeat the details here. It is reassuring to see that our notation for the generic case reduces to that of the active braids defined in [10]. For active braids represented by trivial diagrams, the crossing sequence is trivial and hence the induced permutation is the identity, i.e. σ_X = 1. There is no difference between the triple of internal twists on the left of X and the one on the right. Moreover, we have S_l = S_r = S in this case [10]. As a result, the generic notation uniquely boils down to a trivial crossing sequence with a single internal-twist triple, two external twists, and a common end-node state S, which is the very notation introduced in [10] for active braids in their trivial representations.
Algebra of equivalence moves: symmetries and relations
Since an active braid can always be reduced to trivial braids with twists, it is sufficient to discuss the algebra of simultaneous rotations for these trivial braids. In [10], we found the general effects of simultaneous rotations on trivial braids, especially the quantities conserved under this class of equivalence moves. For generic braids, however, we need to consider more general rotations of a braid, denoted by R_{m,n} with m, n ∈ Z, which is the combination of an mπ/3 rotation of the left end-node and an nπ/3 rotation of the right end-node of the braid. Let us record the algebraic form of such a rotation on a generic braid (Eq. 3) and then explain it. On the RHS of Eq. 3, the original end-node states, S_l and S_r, become (−)^m S_l and (−)^n S_r respectively. This is because, by [3], a π/3 rotation of a node always flips the state of the node once, which means an mπ/3 rotation flips the state of a node m times. Also according to [3], a rotation of the left (right) end-node of a braid creates a crossing sequence, appended to the left (right) of the original crossing sequence of the braid. In Eq. 3, the newly generated sequence on the left is denoted by a function X_l(S_l, m), depending on the original left end-node state and the amount of rotation, m. Likewise, the new crossing sequence on the right is denoted by the function X_r(S_r, n). We will elaborate on these two functions shortly. As a consequence, the permutation induced by the crossing sequence changes accordingly, from σ_X to σ_{X_l(S_l,m) X X_r(S_r,n)}.
In addition, the left triple of internal twists is affected by the rotation of the left end-node, which induces a permutation P^{S_l}_m on the triple, determined by the original end-node state and the amount of rotation m. This function, which obviously takes its values in the group S_3 shown in Eq. 2, is the same as that induced by a simultaneous rotation on active braids, defined in [10]. One may wonder why the similar permutation induced by the rotation of the right end-node does not appear in Eq. 3. This is due to the advantage of our notation, which only needs the triple of internal twists on one side, while the triple on the other side is taken care of by the permutation σ. If one indeed wants to make the triple of internal twists beside the right end-node explicit, one can use the alternative form of the rotation, Eq. 4, instead.
Finally, the common increment of −m − n of all internal twists, and the changes of the two external twists under the rotation R m,n in Eq. 3 and Eq. 4 are simple effects of the rotation [3].
We now explain more about these functions. Since the permutations P S m here are the same as those defined in [10], we adopt the following lemma from [10] which states the general relations they satisfy; a proof of this lemma can be found in the reference.
where n ∈ Z.
Besides, the equations below are easy to derive [10]; they are listed here for possible future use.
where n ∈ Z.
According to the graphic definitions of rotations in [3], we found that X l (S, m) and X r (S, n) have the following general algebraic forms.
where n, m ∈ Z. If an exponent in Eq. 7 is positive, it means, for example, (ud)² = udud. We utilize a definition from [11] for the inverse of a crossing sequence: for X = x_1 x_2 ... x_n, its inverse is X^{-1} = x_n^{-1} ... x_2^{-1} x_1^{-1}. Given this, the meaning of the negative exponents in Eq. 7 is clear.
It is obvious that the number of crossings of either X_l(S_l, m) or X_r(S_r, m) does not depend on the end-node state, and neither does the sum of their crossing numbers (Eq. 8), where x_i ∈ X_l(+, m), y_i ∈ X_l(−, m), z_i ∈ X_r(+, m), and w_i ∈ X_r(−, m). In addition, there is a useful relation between X_l(S, m) and X_r(S, n), as stated in the following Lemma (Lemma 2); its proof treats the cases of m even and m odd separately and concludes that the relation holds for any S and m. The rotation R_{m,n} is actually a generalization of the simultaneous rotation R_{n,−n} defined in [10] as acting on actively-interacting braids in their extremal representations, each of which is a trivial braid diagram with two identical end-node states. In this case S_l = S_r = S and X ≡ I, indicating σ_X = 1. Therefore, for consistency, our general rotation R_{m,n} should reduce to the simultaneous rotation R_{m,−m} on these braids if we set n = −m. This is indeed so, as a direct evaluation of Eq. 3 in this case reproduces the very simultaneous rotation defined in [10].
With these ingredients, we can proceed to find the conserved quantities of a generic braid under the general equivalence moves R_{m,n}. One can see that, unlike for trivial braids under simultaneous rotations in [10], T_l + T_r and the triple (T_a, T_b, T_c) are no longer conserved under a generic rotation; only a combination of them with the sum of crossing numbers, Σ_i x_i, is conserved under these general equivalence moves. Besides, the conserved quantity S² is generalized to the effective state χ = (−)^{|X|} S_l S_r for generic braids. These results are summarized in the following Lemma.
Lemma 3.
Under a general rotation R m,n , a braid's effective twist number, Θ, and its effective state, χ, are conserved.
Proof. By Eq. 3, a general rotation R_{m,n} transforms a generic braid into the form displayed above, where y_i ∈ X_l(S_l, m) and z_i ∈ X_r(S_r, n). Nonetheless, by Eq. 7 and Eq. 8, the combinations defining Θ and χ retain their original values. This establishes the proof.
As a direct consequence of Lemma 3, we have the following Theorem, which provides a characterization of actively-interacting braids. Theorem 1. The effective state of any actively-interacting braid is χ = 1, and any braid with effective state χ = −1 must be passive.
Proof. The proof of this theorem is very simple. Any actively-interacting braid has a trivial representation, whose effective state is χ ≡ S 2 = 1. Hence, according to Lemma 3, the effective state of any actively-interacting braid must be χ = 1. This implies, on the other hand, any stable braid with χ = −1 is never actively-interacting, and is thus passive.
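As a quick check in this notation (a small sketch of ours, encoding the end-node states + and − as +1 and −1), the effective state follows directly from the parity of the crossing sequence and the end-node states, χ = (−1)^{|X|} S_l S_r:

def effective_state(X, S_l, S_r):
    # chi = (-1)^{|X|} * S_l * S_r, with S_l, S_r in {+1, -1}.
    return ((-1) ** len(X)) * S_l * S_r

# An active braid admits a trivial representative: X is empty and S_l = S_r = S,
# so chi = (+1) * S * S = 1, in line with Theorem 1.
assert effective_state([], +1, +1) == 1 and effective_state([], -1, -1) == 1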
All results we have obtained so far are valid for braids in any representation. Now we would like to consider the extremal representation of a braid and find out how equivalence moves act on braids in this representation in particular, together with the corresponding conserved quantities. The case of actively-interacting braids is investigated in [10]: there turn out to be an infinite number of extrema, which are trivial braids related to each other by simultaneous rotations. Moreover, T_l + T_r, the triple (T_a, T_b, T_c) up to permutation, and S² are conserved under these simultaneous rotations. For passive braids the situation is more involved, because their extrema are not trivial braids and because generic rotations (including generic simultaneous rotations) increase the number of crossings of an extremum. However, there is also an infinite number of extrema of a passive braid, due to the following Lemma. Proof. By the definition of an extremum, all extrema of the same braid have the same number of crossings. Thus we only need to prove that R_{3k,−3k}, k ∈ Z, acting on an extremum preserves its number of crossings; then the resultant representation of the braid must also be an extremum, for otherwise the braid diagram undergoing the rotation could not have been an extremum in the first place. Additionally, since R_{3k,−3k} = (R_{±3,∓3})^{|k|} by [10], where the ± and ∓ depend on the sign of k, it is sufficient to prove the case of k = ±1, which are just simultaneous π-rotations. We now prove that simultaneous π-rotations of a braid take the braid's crossing sequence X = x_1 ... x_i ... x_N, N ∈ N, to X̄ = x̄_1 ... x̄_i ... x̄_N, where x̄_i is obtained from x_i by a fixed substitution of generators, and thus keep the number of crossings of the braid invariant. For one-crossing braids with arbitrary S_l and S_r, it is straightforward to verify this directly from the explicit forms of X_l(S_l, ±3) and X_r(S_r, ∓3). We assume that this is true for any braid with up to N ∈ N crossings, with arbitrary end-node states. Hence, for any braid with N + 1 crossings and end-node states, say S'_l and S'_r, we insert a pair of crossing sequences X_r(S, ∓3) and X_l(S, ±3), with arbitrary S, whose product is trivial by Lemma 2, i.e. X_r(S, ∓3) X_l(S, ±3) = I, between the N-th and (N+1)-th crossings. Note that these two crossing sequences are not created by real rotations R_{0,±3} or R_{±3,0}, but are only equal to the crossing sequences created by those two rotations, respectively. Hence, according to the assumption that the claim is valid for an N-crossing braid with arbitrary end-node states, and the fact of its validity for all one-crossing braids, we arrive at the desired result. Bearing in mind that |X̄| = |X|, by induction the simultaneous rotations R_{±3,∓3} take a generic braid with crossing sequence X to an equivalent braid with sequence X̄, which does not change the number of crossings. This certainly indicates that R_{3k,−3k} with k ∈ Z rotates an extremum to another extremum of the same braid, which validates the proof. Furthermore, by Eq. 3, Eq. 4, and Eq. 5, and with the convenient redefinition X̄ = f̄(X), we can pin down the algebraic form of the action of an R_{3k,−3k}, k ∈ Z, on generic braids (Eqs. 9 and 10), where (1,3)^k is the permutation induced by R_{3k,−3k}, and f̄^k(X) = X for k even, while f̄^k(X) = f̄(X) for k odd.
Similar to the case of actively-interacting braids, there are quantities conserved under rotations of the form R_{3k,−3k}, which are listed in the Lemma below. Proof. From Eq. 9 and Eq. 10, it is obvious that a rotation of the form R_{3k,−3k}, k ∈ Z, takes T_l + T_r to T_l + 3k + T_r − 3k = T_l + T_r, and turns the triple of internal twists into a permuted triple. These conserved quantities imply that the previously defined Θ is also a conserved quantity under this class of equivalence moves, which is expected because in Lemma 3 we have shown that Θ is conserved under any rotation; that is, it is the same for any representation of a braid. Now that we have found the quantities conserved under rotations of the form R_{3k,−3k}, k ∈ Z, the only remaining issue is that we have not proven that simultaneous rotations by multiples of π are the only possible class of rotations under which the set of all extrema of a braid is closed. If this is true, then each conserved quantity in Lemma 5 is identical for all extrema of a braid. There is strong evidence that this is indeed the case; however, we lack a rigorous proof. Therefore, we only state this observation as a conjecture.
Conjecture 1.
Any rotation that transforms an extremum to another extremum of the same braid must take the form R_{3k,−3k} with k ∈ Z. Assuming this, all extrema of a braid share the same conserved quantities listed in Lemma 5.
Algebra of interactions: symmetries and relations
In [10] it is shown that the interaction of any two active braids produces another active braid. Now that we are dealing with not only active braids but also the passive ones, one may ask what the outcome of the interaction of an active braid and a passive braid, say a propagating braid, should be. To answer this question, we need sufficient preparation, divided into the following subsections.
Conserved quantities under interactions
We first repeat in words the interaction condition formulated in [8,10]. This condition demands that one of the two braids, say B 1 and B 2 , under an interaction must be active and that the two adjacent nodes, one of B 1 and the other of B 2 , are either already in or can be rotated to the configuration where they have the same state and share a twist-free edge. The latter requirement is actually the condition of a 2 → 3 Pachner move [8]. The algebraic form of this condition is explicitly given in [10] and is thus not duplicated here but will rather be adopted directly.
Since a braid is in fact an equivalence class of braid diagrams, a convenient choice of the representative of the class is important. Whether a braid is propagating or not is most transparent when the braid is represented by its unique representative with no external twists. On the other hand, an active braid can always be put in a trivial representation, which simplifies the calculation of interactions. Therefore, in this section any active braid is represented by one of its extrema, i.e. a trivial diagram with external twists, and any braid which does not actively interact is represented by its unique representative. Now, when an active braid B meets a passive braid B', say from the left of B' (the case where B is on the right of B' follows similarly), with the interaction condition fulfilled, what does the resulting braid B + B' look like? Here, as in [10], we use a + for the operation of interaction. A special case is that B in its trivial form has no right external twist; since B' is in the representation without external twists, one can then directly apply a 2 → 3 move to B's right end-node and the left end-node of B', and then play with the techniques introduced in [8] to complete the interaction. Let us address this simple case first.
Lemma 6. Given an active braid B in a trivial representation with end-node state S and T_r = 0, and a passive braid B' in its representative with zero external twists whose left end-node state S_l satisfies S_l = S, the interaction B + B' results in the braid given by Eq. 11. Proof. As T_r = 0 and S = S_l, the interaction condition is met and thus no rotation is needed; hence, according to [8], B + B' forms a connected sum of B and B', which in our algebraic language is Eq. 11, where a rotation R_{−T_l,0} is applied after the connected sum to put the resulting braid in its representative with zero external twists, inducing a permutation P^S_{−T_l} on the left triple of internal twists and a crossing sequence X_l(S, −T_l) appended to the original X from the left.
However, in general the trivial diagram representing an active braid may have external twists on both external edges. If the interaction condition is satisfied when the trivial braid in this case meets a passive braid, a rotation is usually required in order to perform the connected sum algebraically for them to interact. We now deal with this.
where (T_l + T_r, ·, ·) is short for (T_l + T_r, T_l + T_r, T_l + T_r). Proof.
where the simultaneous rotation R_{T_r,−T_r} is applied to realize the interaction condition in order to perform the connected sum, and the rotation R_{−T_l−T_r,0} is exerted so that the final result is in the representative without external twists, which induces a permutation P^{(−)^{T_r} S}_{−T_l−T_r} and a crossing sequence X_l((−)^{T_r} S, −T_l − T_r) concatenated to X from the left. The above equation obviously reduces to Eq. 11 when T_r = 0.
Nevertheless, for each active braid there is an infinite number of trivial braid diagrams which are equivalent, in the sense that any two of them are related by a simultaneous rotation R_{n,−n}, n ∈ Z. It is then natural to ask whether the choice of the trivial braid diagram representing a braid equivalence class influences the interaction of the braid with another braid. The answer is "No". The reason is obvious, because of the equivalence of the trivial diagrams. However, due to the necessity of realizing the interaction condition in a concrete calculation of an interaction, it is better to formulate this claim explicitly in our new notation as a Lemma.
Sr 0 be the passive braid on which B interacts. We assume the interaction condition is satisfied by (−) Tr S = S l . Any other trivial braid, say B n , representing B can be obtained from B 0 by B 0 + B ′ has already been shown in Lemma 7, but we only need Eq. 12 therein, which is the configuration of the two braid after the interaction condition is realized. If we replace B 0 by B n in the interaction, we have which is exactly the same as Eq. 12. That is, if B 0 interacts with B ′ , so does B n , and they give rise to the same result. Likewise, this is also true for the case of B ′ + B. This closes the proof. Now that we established Lemma 8, we may choose to always represent an active braid B by its trivial representative without right (left) external twist, in dealing with the interaction of B onto a passive braid from the left (right), which simplifies the calculation and expression because Lemma 6 directly applies. Moreover, the result of Lemma 6, namely Eq. 11, is identical to the result when the active braid is represented by its unique representative with zero external twists. This again, together with [10], shows that the choice of representative of a braid does not affect the result of the interaction involving the braid, in accordance with [8]. Examples can be found in [8], one just need to cast them in our new symbolic notation.
Equipped with this algebra, we shall prove one of our primary results.
Theorem 2. Given an active braid B and a passive braid B' in its representative with zero external twists, such that B'' = B + B', the effective twist number Θ is an additive conserved quantity, while the effective state χ is a multiplicative conserved quantity, namely Θ(B'') = Θ(B) + Θ(B') and χ(B'') = χ(B)χ(B'). Proof. We can readily write down Θ and χ for B, B', and B'' from the explicit form of the interaction. Hence, according to Eq. 14, the additivity of Θ follows, where the second equality is a result of Σ_j x_j = T_l + T_r, by Eq. 8. Besides, the multiplicativity of χ follows from counting the crossings and end-node states of B''. This theorem demonstrates that the two representative-independent conserved quantities discovered so far, Θ and χ, are also conserved under interactions, in the sense that the former is additive while the latter is multiplicative. This is consistent with [10], in which only interactions between active braids are discussed. In particular, χ becomes the S² in [10], whose conservation means that the interacting character of the braids is preserved. Furthermore, according to Theorem 1, the multiplicative conservation of χ shows that if the passive braid involved in an interaction has χ = −1, the resulting braid must also have χ = −1, and is thus a passive braid too.
Asymmetry between B + B ′ and B ′ + B
It is important to note that Theorem 2 is also true for the case of an interaction where the active braid is on the right of the passive braid. In fact, all the discussion above can be applied equally well to this case. One must then ask a question: does an active braid give the same result when it interacts onto a passive braid from the left and from the right, respectively? The answer is "No" in general. We now discuss this issue by considering an active braid, B, and a passive braid, B'.
First of all, even if the interaction condition is met in the case B + B', there is no guarantee that the interaction condition is also satisfied in the case of B' + B, in which case B' + B is simply an impossible interaction. Even if we assume the interaction condition can be realized in both cases, B + B' and B' + B still give rise to different results, i.e. inequivalent braids, in general. Let us study the details.
In the case of B + B', by Lemma 8 we can represent B by its trivial braid diagram without the right external twist, viz. B ≅ B_l, the trivial diagram with left external twist T_l, internal twists (T_a, T_b, T_c), end-node state S on both sides, and T_r = 0, which allows us to use Eq. 11. However, in the case of B' + B, we represent B by its trivial diagram without the left external twist, which is obtained from B_l by a simultaneous rotation, i.e. B ≅ B_r = R_{−T_l,T_l}(B_l). Denoting the end-node states of B' by S_l and S_r, the interaction condition in the latter case is S_r = (−)^{T_l} S. Together with S = S_l, we conclude that the condition for both B + B' and B' + B to be possible is given by Eq. 18. Given this, by a calculation similar to that in Lemma 6, one finds B' + B as given in Eq. 19. To compare this to B + B', we rewrite Eq. 11 as Eq. 20, taking Eq. 18 into account.
It is important to notice that, by Eq. 19 and Eq. 20, both B + B' and B' + B are in the representative with zero external twists. Since we know such a representative is unique for each equivalence class of braids, there is no way for B + B' and B' + B to be equivalent to each other without being simply equal. There are, however, possibilities for B + B' and B' + B to be equal, but for this to be true the braids B and B' are rather strongly constrained. Firstly, one obviously has to require S_l = S_r and T_l = 2k, k ∈ Z, in Eq. 19 and Eq. 20, such that B + B' and B' + B have the same end-node states. This, together with the interaction condition, also indicates S_l = S_r = S. Keeping this in mind, one must then demand that B + B' and B' + B have the same crossing sequence, namely X_l(S, −T_l) X = X X_r(S, −T_l), which can also be written as X_l(S, −T_l) X X_r^{-1}(S, −T_l) = X. With Lemma 2, this can be put in the form X_l(S, −T_l) X X_l(S, T_l) = X. Since Eq. 7 states that X_l(S, m) = X_r(S, m) for m even, this condition is better expressed as X_l(S, −T_l) X X_r(S, T_l) = X, which now appears to be the requirement that a simultaneous (−T_l)-rotation leaves the crossing sequence intact. Although it has been conjectured at the end of Section 3 that only simultaneous rotations by 6k, k ∈ Z, are able to achieve this, we would like to keep the condition in its current general form because the conjecture has not been proved. Interestingly, however, we will see an automatic appearance of 6k, k ∈ Z, shortly.
From this point on, three possibilities arise, each of which leads to a different condition on top of the conditions found above. The first possibility is that we do not put any constraint on the internal twists. Hence, according to Eq. 19 and Eq. 20, the permutations acting on the triple of internal twists in the two expressions must coincide. An immediate consequence is that σ_X = 1, which however does not constrain the pattern of X very much. Moreover, as we know that T_l is even, then by Eq. 5, P^S_{−T_l} ≡ 1 if and only if T_l = 6j, j ∈ Z. With this T_l, Lemma 4 ensures the fulfillment of the condition X_l(S, −T_l) X X_r(S, T_l) = X, which therefore appears to be redundant in this case.
The second possibility is to relax the first one a little by only requiring σ_X^{-1} P^S_{−T_l} = P^S_{−T_l}, which only means that σ_X = 1 but places no criterion on P^S_{−T_l}. Then we notice that, while in Eq. 20 both of the triples (T_a, T_b, T_c) and (T'_a, T'_b, T'_c) appear, the required equality of the two results holds provided that P^S_{−T_l} ≡ 1. Therefore, by simple logic, the condition for B + B' = B' + B, if Conjecture 1 stands, is reduced to the constraints above together with either σ_X = 1 or a corresponding constraint on the internal twists. The discussion above focuses on how B + B' can be equal to B' + B. Nevertheless, since [11] discovered discrete transformations of braids, which are mapped to C, P, T, and their products, and discussed their actions on braid interactions, B + B' and B' + B may simply be related by a discrete transformation. As pointed out in [11], the transformations P, T, CP, and CT swap the two braids undergoing an interaction; that is, for example, P(B + B') = P(B') + P(B). If B and B' happen to be invariant under P, then B' + B is equal to P(B + B'). We will not go into the detailed mathematical forms of these transformations, which can be found in [11]; rather, we list below the conditions for this to be true (Eq. (30)).
The algebraic structure
With the help of this notation and the algebraic method established on it, we are able to show that the set of all stable braids, namely the active braids, propagating braids, and stationary braids, forms an algebra under the braid interaction. This algebra is closed, because any interaction of the type defined in [8] of two stable braids never leads to an unstable braid, due to the stability condition put forward in [8,14]. In [10] it is demonstrated that an interaction between two active braids always results in another active braid. Interactions between active and passive braids, on the other hand, turn out to be more involved. However, provided with all the discussion in previous sections, we can try to answer the question raised at the beginning of Section 4. This question can first be partly answered by the following theorem.
Theorem 4. The braid resulting from any interaction between an active braid and a passive braid is again passive.
Proof. Recalling Theorem 1 and Theorem 2, since the effective state χ is a multiplicative conserved quantity under interaction and an active braid must have χ = 1, the interaction of an active braid and any passive braid with χ = −1 must lead to a passive braid with χ = −1.
However, this is not a complete proof because a passive braid may also have χ = 1. A full proof can easily be constructed by contradiction. For this purpose, we need the following facts about passive braids, extracted from [8]: 1. A stationary braid is neither left nor right completely reducible.
2. A (left-) right-propagating braid is never completely-reducible from its (right) left end-node.
3. A two-way propagating braid is not completely-reducible from either end-node; otherwise it must be both left and right completely-reducible, which makes it an active braid if equipped with appropriate twists. Now let us consider an active braid B in an arbitrary trivial representative and a passive braid B ′ in its unique representative, with the interaction condition met. Their interaction, say B + B ′ (the case of B ′ + B, if possible, follows similarly), results in, by Eq. 13 and with internal twists ignored because they are irrelevant, We do not apply a rotation of the left end-node of B + B ′ because things would become less transparent otherwise. Note that according to Eq. 31, at this stage, the two end-nodes of B + B ′ are respectively in the same states as those of B ′ before the interaction, and also that it has the same crossing sequence X as B ′ does. This means the irreducibility of B + B ′ respects that of B ′ .
We know that an active braid must be both left and right completely-reducible. Now that B ′ is passive, it is never completely-reducible from both sides, which by Eq. 31 means B + B ′ is not either. Otherwise, B ′ would have to be two-way completely reducible in the first place, which contradicts the basic facts of a passive braid listed above. Therefore, the theorem holds.
Since an interaction of two active braids gives rise to active braids only, as aforementioned, Theorem 4 immediately establishes the set of active braids as a subalgebra of the algebra of stable braids.
We still need to discuss if the interaction of an active braid and a passive propagating (stationary) braid creates a propagating (stationary) braid. The answer is "Not always". To illustrate this, we show two examples.
which is an irreducible braid according to [3], and is thus stationary. This example shows the interaction of an active braid and a propagating braid can result in a stationary braid.
The reason such a situation arises is the pairwise cancelation of crossings and the change of the end-node state as a consequence of the interaction, which is clear in the example above.
On the other hand, an active braid and a stationary braid can also produce a propagating braid via their interaction. To see this, we can use the braid in Eq. 32 as the stationary one and name it B s , and consider an active braid B = − 1 [−1, −1, 1] − 0 . Again by Eq. 11 we obtain which is the very B p of the previous example. All in all, the set of all stable braids forms an algebra with the interaction as its binary operation, which is associative. Stable braids are local excitations of embedded spin networks, which are considered to be basis states describing the fundamental space-time. Consequently, a physical state is usually a superposition of these basis states. It is therefore clear that braid interaction, as the binary operation of the algebra of stable braids, is bilinear. Within this algebra, the set of active braids behaves as a subalgebra. In addition, due to the asymmetry between left and right interactions elaborated in Section 4.2, this algebra is noncommutative.
Conclusion, discussion and outlook
Conservation laws play a pivotal role in revealing the underlying structure of a physical theory. By means of invariants and conserved quantities we are able to determine how the content of the theory relates to particle physics, or what kind of new mathematical and/or physical inputs are necessary so that the theory is meaningful.
We have generalized the algebraic notation of active braids, proposed in [10], to all our braids, found a set of equivalence relations relating them, and developed conserved quantities associated with these relations. More importantly, by means of this notation we studied the interaction between active braids and passive braids, which leads us to the fact that the set of all stable braids forms a noncommutative algebra, with a subalgebra containing only active braids. From this we have found both additive and multiplicative conserved quantities of braids under interaction. These are not only dynamically conserved but also conserved under the equivalence moves.
A possible next step is to determine which of these conserved quantities may correspond to quantum numbers and, together with the results for interactions of braids found in this paper, to fully classify the set of braids. These conserved quantities find an application in a companion paper of ours [11], in which discrete transformations of our braids have been discovered and mapped to charge conjugation, parity, time reversal and their products. The results on interactions also stimulate another work in this direction [12].
The ultimate physical content of our braids cannot be fully understood at this stage. In [13], regarding the braids of 3-valent embedded spin networks, a tentative mapping between the 3-valent braids and Standard Model particles is proposed, however in the absence of dynamics. In the 4-valent case, as also discussed in the companion paper [11], such a direct mapping, if not impossible, is at least obscure at this level of understanding of the braids. A reason is that the dynamics, namely the propagation and interaction of 4-valent braids, strongly constrains the possible sets of twists, crossing sequences, and end-node states of a braid for it to be propagating and/or interacting. In addition, the closed form of this constraint is still missing. Consequently, one should not assign a 4-valent braid a topological property just for it to possibly be a Standard Model particle, which is what has been done in the 3-valent case. More study, and in particular perhaps new mathematical tools, is needed to reveal whether 4-valent braids can directly correspond to Standard Model particles.
If our 4-valent braids are more fundamental entities than the Standard Model particles are, then what do they correspond to, what do their interactions mean actually, and how do they give rise to Standard Model particles under certain semi-classical limit? These are profound but natural questions to ask. However, our current understanding of 4-valent braids has not provided sufficient knowledge to give an answer. The realm of braids of spin networks is enormous, and a great deal of future work must be done.
For example, in our study we have not yet included spin network labels, which are normally representations of gauge groups, or of the quantum groups of the corresponding gauge groups. This may lead one to believe that the properties of braids are independent of the spin network they live on, which is nevertheless not true. On the one hand, although whether a braid is propagating and/or interacting depends on its topological setting, whether it can indeed propagate away from its location and/or interact with its adjacent braids depends on the structure of its neighborhood and hence of the whole spin network it is on. On the other hand, when spin network labels are taken into account, a braid becomes manifestly dependent on its spin network, with only its topological properties unchanged. Braids of the same topology but with different sets of spin network labels would be considered physically different, though perhaps not different particles.
Moreover, with spin network labels, a dynamical move, e.g. a 2 → 3 move, may have a superposition of outcomes in identical topological configuration but different set of spin network labels; each outcome has a certain probability amplitude. However, the original set of topological quantities of a braid which is essential for the braid to be propagating and interacting is still valid even after spin network labels are considered.
We know that the very notion of particles in local quantum field theories depends on the background geometry of the theories. Our braids also depend on the spin network they live on. Although how to take a physically meaningful semi-classical limit of our approach remains an open question, the matter particles resulting from the braids in a semi-classical background, as a reasonable such limit of superposed spin networks, would be expected to depend on the background geometry as well.
It is therefore very interesting and necessary to study the effects of spin networks in our future research. Our companion paper [11] also indicates, from another perspective, the necessity of taking into account spin network labels. | 12,180 | 2008-05-05T00:00:00.000 | [
"Physics"
] |
Models for Quadratic Algebras Associated with Second Order Superintegrable Systems in 2D
There are 13 equivalence classes of 2D second order quantum and classical superintegrable systems with nontrivial potential, each associated with a quadratic algebra of hidden symmetries. We study the finite and infinite irreducible representations of the quantum quadratic algebras through the construction of models in which the symmetries act on spaces of functions of a single complex variable via either differential operators or difference operators. In another paper we have already carried out parts of this analysis for the generic nondegenerate superintegrable system on the complex 2-sphere. Here we carry it out for a degenerate superintegrable system on the 2-sphere. We point out the connection between our results and a position dependent mass Hamiltonian studied by Quesne. We also show how to derive simple models of the classical quadratic algebras for superintegrable systems and then obtain the quantum models from the classical models, even though the classical and quantum quadratic algebras are distinct.
Introduction
A classical (or quantum) mth order superintegrable system is an integrable n-dimensional Hamiltonian system with potential that admits 2n − 1 functionally independent constants of the motion, the maximum possible, and such that the constants of the motion are polynomials of at most order m in the momenta. Such systems are of special significance in mathematical physics because the trajectories of the classical motions can be determined by algebraic means alone, whereas the quantum eigenvalues for the energy and the other symmetry operators can also be determined by algebraic methods. In contrast to merely integrable systems, they can be solved in multiple ways. The best known (and historically most important) examples are the classical Kepler system and the quantum Coulomb (hydrogen atom) system, as well as the isotropic oscillator. For these examples m = 2 and the most complete classification and structure results are known for the second order case. There is an extensive literature on the subject [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22], with a recent new burst of activity [23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44]. All such systems have been classified for real and complex Riemannian spaces with n = 2 and their associated quadratic algebras of symmetries computed [15,29,30,24,31]. For nonconstant potentials there are 13 equivalence classes of such systems (under the Stäckel transform between manifolds), 7 with nondegenerate (3-parameter) potentials and 6 with degenerate (1-parameter) potentials [44,30]. The constants of the motion for each system generate a quadratic algebra that closes at order 6 in the nondegenerate case and at order 4 in the degenerate case.
The representation theory of such algebras is of great interest because it is this quadratic algebra "hidden symmetry" that accounts for the degeneracies of the energy levels of the quantum systems and the ability to compute all associated spectra of such systems by algebraic means alone. In principle, all of these quadratic algebras can be obtained from the quadratic algebra of a single generic 3-parameter potential on the complex two-sphere by prescribed limit operations and through Stäckel transforms. However, these limiting operations are not yet sufficiently understood. Each equivalence class has special properties, and each of the 13 cases is worthy of study in its own right. A powerful technique for carrying out this study is the use of "one variable models". In the quantum case these are realizations of the quadratic algebra (on an energy eigenspace) in terms of differential or difference operators acting on a space of functions of a single complex variable, and for which the energy eigenvalue is constant. Each model is adapted to the spectral decomposition of one of the symmetry operators, in particular, one that is associated with variable separation in the original quantum system. The possible irreducible representations can be constructed on these spaces with function space inner or bilinear products (as appropriate) and intertwining operators to map the representation space to the solution space of the associated quantum system. (There have been several elegant treatments of the representation theory of some quadratic algebras, e.g. [16,17,18,5,6,19,37]. However these have almost always been restricted to finite dimensional and unitary representations and the question of determining all one variable models has not been addressed.) In the classical case these are realizations of the quadratic algebra (restricted to a constant energy surface) by functions of a single pair of canonical conjugate variables.
In [39] we have already carried out parts of this analysis for the generic nondegenerate superintegrable system on the complex 2-sphere. There the potential was V = a 1 /s 2 1 + a 2 /s 2 2 + a 3 /s 2 3 where s 2 1 + s 2 2 + s 2 3 = 1, and the one variable quantum model was expressed in terms of difference operators. It gave exactly the algebra that describes the Wilson and Racah polynomials in their full generality. In this paper we treat a superintegrable case with a degenerate potential. Our example is again on the complex 2-sphere, but now the potential is V = α/s 2 3 . Though this potential is a restriction of the generic potential, the degenerate case admits a Killing vector so the quadratic algebra structure changes dramatically. The associated quadratic algebra closes at level 4 and has a richer representation theory than the nondegenerate case. Now we find one variable models for an irreducible representation expressed as either difference or differential operators, or sometimes both. We show that this system can occur in unobvious ways, such as in a position dependent mass Hamiltonian recently introduced by Quesne [37].
The second part of the paper concerns models of classical quadratic algebras. Here we inaugurate this study, in particular its relationship to quantum models of superintegrable systems. We first describe how these classical models arise out of standard Hamilton-Jacobi theory. In [44,27] we have shown that for second order superintegrable systems in two dimensions there is a 1-1 relationship between classical quadratic algebras and quantum quadratic algebras, even though these algebras are not isomorphic. In this sense the quantum quadratic algebra, the spectral theory for its irreducible representations and its possible one variable models are already uniquely determined by the classical system. We make this concrete by showing explicitly how the possible classical models of the classical superintegrable system with potential V = α/s 2 3 lead directly to the possible one variable differential or difference operator models for the quantum quadratic algebra. Then we repeat this analysis for the nondegenerate potential V = a 1 /s 2 1 + a 2 /s 2 2 + a 3 /s 2 3 where the quantum model is essentially the Racah algebra QR (3) and its infinite dimensional extension to describe the Wilson polynomials. Our results show that the Wilson polynomial structure is already imbedded in the classical system with potential V = a 1 /s 2 1 + a 2 /s 2 2 + a 3 /s 2 3 , even though this potential admits no Lie symmetries. Thus the properties of the Wilson polynomials in their full generality could have been derived directly from classical mechanics! This work is part of a long term project to study the structure and representation theory for quadratic algebras associated with superintegrable systems in n dimensions [23,24,25,26,27,35,36]. The analysis for n = 3 dimensions will be much more challenging, but also a good indication of behavior for general n.
The structure equations for S3
Up to a Stäckel transform, every 2D second order superintegrable system with nonconstant potential is equivalent to one of 13 systems [44]. There is a representative from each equivalence class on either the complex 2-sphere or complex Euclidean space. In several papers, in particular [15], we have classified all of the constant curvature superintegrable systems, and this paper focuses on two systems contained in that list: S9 and S3. The quadratic algebra of the generic nondegenerate system S9 was already treated in [39] and we will return to it again in this paper. First we study the quadratic algebra representation theory for the degenerate potential S3. This one-parameter potential 2-sphere system corresponds to the potential where s 2 1 + s 2 2 + s 2 3 = 1 is the imbedding of the sphere in Euclidean space. The quantum degenerate superintegrable system is where J 3 = s 1 ∂ s 2 − s 2 ∂ s 1 and J 2 , J 3 are obtained by cyclic permutations of the indices 1, 2, 3. The basis symmetries are where J 3 = s 2 ∂ s 1 − s 1 ∂ s 2 plus cyclic permutations. They generate a quadratic algebra that closes at order 4. The quadratic algebra relations are [H, X] = [H, L j ] = 0 and The Casimir relation is We know that the quantum Schrödinger equation separates in spherical coordinates, and that corresponding to a fixed energy eigenvalue H, the eigenvalues of X take the linear form where n is an integer, so we will look for irreducible representations of the quadratic algebra such that the representation space has a basis of eigenvectors f n with corresponding eigenvalues λ n . (Indeed, from the analysis of [16] or of [6] the structure equations imply that the spectrum of X must be of this form.) We will use the abstract structure equations to list the corresponding representations and compute the action of L 1 and L 2 on an X basis. Thus, we assume that there is a basis {f n }, for the representation space such that Here, A, B are not yet fixed. We do not impose any inner product space structure. From these assumptions we can compute the action of L 1 and L 2 on the basis. Indeed, On the other hand, from the equations (1) we have Now we equate equations (4) with (5) or (6). For j = n, equating coefficients of f n in the resulting identities yields the conditions D(n, n) = 0, Similarly, equating coefficients of f j in the case j = n yields A(n − j)D(j, n) = −2C(j, n), A(n − j)C(j, n) = 2D(j, n), Thus, either C(j, n and D(j, n) vanish or A 2 (n−j) 2 = −4. We can scale A such that the smallest nonzero jump is for j = n ± 1, in which case A = ±2i. By replacing n by −n if necessary, we can assume A = 2i. (We also set B = iµ.) Thus the only possible nonzero values of C(j, n), D(j, n) are for j = n, n ± 1 and there are the relations D(n + 1, n) = −iC(n + 1, n), D(n − 1, n) = iC(n − 1, n).
Comparing (3) and (7) and equating coefficients of f n±2 , f n±1 , respectively, on both sides of the resulting identities, we do not obtain new conditions. However, equating coefficients of f n results in the condition where F n = C(n, n − 1)C(n − 1, n). The general solution of this difference equation is where κ is an arbitrary constant.
To determine κ we substitute these results into the Casimir equation (2) and set equal to zero the coefficients of f j in the expression Cf n = 0. For j ≠ n we get nothing new. However, for j = n we find Thus, F n = C(n, n − 1)C(n − 1, n) is an explicit 4th order polynomial in n. By factoring this polynomial in various ways, and re-normalizing the basis vectors f n appropriately via f n → c(n)f n , we can achieve a realization of the action of L 1 and L 2 such that and all of the coefficients are polynomials in n. The 4 roots of F n are so a convenient factorization is From these expressions and from we see that we can find C, D coefficients in which the dependence on n is always polynomial.
There are raising and lowering operators Indeed, To get a one-variable model of the quadratic algebra in terms of second order differential operators, we can simply make the choices f n (t) = t n , X = i(2t d/dt + µ), and define L 1 from expressions (9) via the prescription with a similar procedure for L 2 .
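As a quick sanity check of this realization, the monomial basis and the first order operator X can be verified symbolically. The snippet below is an illustrative sketch (not code from the paper): it confirms that f_n(t) = t^n is an eigenvector of X = i(2t d/dt + µ) with eigenvalue i(2n + µ), so that multiplication by t shifts the eigenvalue by 2i, consistent with the choice A = 2i made above.

```python
# Illustrative check: with f_n(t) = t^n and X = i*(2*t*d/dt + mu),
# f_n is an eigenvector of X with eigenvalue i*(2n + mu).
import sympy as sp

t, mu = sp.symbols('t mu')
n = sp.symbols('n', integer=True, nonnegative=True)

f_n = t**n
X = lambda f: sp.I * (2 * t * sp.diff(f, t) + mu * f)   # X = i(2 t d/dt + mu)

print(sp.simplify(X(f_n) - sp.I * (2 * n + mu) * f_n))          # 0
# multiplication by t raises n by one and shifts the eigenvalue by +2i
raised = t * f_n
print(sp.simplify(X(raised) - sp.I * (2 * (n + 1) + mu) * raised))   # 0
```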
In general the irreducible representations that we have defined are infinite dimensional and the basis vectors f n occur for all positive and negative integers n. We can obtain representations bounded below, and with lowest weight µ for −iX and corresponding lowest weight vector f 0 , simply by requiring F 0 = 0, which amounts to setting κ = 0. For convenience we set α = 1/4−a 2 . Then we have Since κ = 0, (8), H must be a solution of this quadratic equation: A convenient choice is If µ is not a negative integer then this bounded below representation is infinite dimensional. However, if there is a highest weight vector f m then we must have At this point it is worth pointing out that all of the finite dimensional, infinite dimensional bounded below, and general infinite dimensional irreducible representations and models of the quadratic algebra associated with the superintegrable system S3 are of direct interest and applicability. A similar argument was made in [39] where we gave examples of various analytic function expansions and distinct unitary structures associated with one superintegrable system. The original S3 quantum system is given in terms of complex variables with no specific inner product structure imposed. One could use the representation theory results to describe eigenfunction expansions simply in terms of analytic functions. If one wants an inner product structure or bilinear product structure, it is merely necessary to impose the structure on a single eigenspace of H, and there are a variety of ways to do this. For example, one could restrict the complex system to the real sphere and impose the standard inner product for that case. Alternatively, one could restrict to the real hyperboloid of one sheet, or the real hyperboloid of two sheets. In all cases the models of the irreducible representations are relevant, though not necessarily to one special case, such as the real sphere with the standard inner product. While we have no direct proof that all models of irreducible representations of quadratic algebras obtained in this way lead to representations for some version of the original superintegrable system, we have no counterexamples. For a deeper analysis we need to construct intertwining operators that relate basis functions for the model with eigenfunctions of the quantum Hamiltonian.
Differential operator models
A convenient realization of the finite dimensional representations by differential operators in one complex variable is This model is also correct for infinite dimensional bounded-below representations, except that now the lowest weight is µ = −m where m = 0, 1, 2, . . . is a complex number. The raising and lowering operators for the model are In the finite dimensional case, for example, the eigenvalues of L 1 are and the corresponding unnormalized eigenfunctions are Now, motivated by the quantum mechanical system on the real 2-sphere, we impose a Hilbert space structure on the irreducible representations such that L 1 and L 2 are self-adjoint and X is skew adjoint: Xf n , f n ′ = − f n , Xf n ′ .
Writing φ n = k n f n where φ n has norm 1, we have the recursion relation For infinite dimensional bounded below representations k 2 n must be positive for all integers n ≥ 0, and we normalize k 0 = 1. Thus For finite dimensional representations we have µ = −m. Normalizing k 0 = 1, (possible for a < 1 or for a > m), we find that an orthonormal basis in the one variable model is given by φ n (t) = k n f n (t) = k n t n , n = 0, 1, . . . , m where Note the reflection symmetry ||f n || = ||f m−n ||.
To derive a realization of the Hilbert space for the differential operator models of the finite dimensional and infinite dimensional bounded below unitary representations in terms of a function space inner product where p, q are polynomials and K is a normalization constant, we use the formal self-and skew-adjoint requirements and obtain a differential equation for the weight function: where ζ = tt. The solution that vanishes at ζ = 1 for a < 1/2 and is integrable at ζ = 0 for a + µ > −1 is (Note that the integral is an even function of Q.) At ζ = 0 this function has a branch point with behavior ζ a+µ . We write t = re iθ , t = re −iθ , ζ = r 2 and choose our contours of integration for the inner product as the unit circle |e iθ | = 1, i.e., 0 ≤ θ ≤ 2π and, in the complex ζ-plane, a contour that starts at ζ = 1 and travels just above the real ζ-axis to circle ζ = 0 once in the counterclockwise direction and returns to ζ = 1 just below the real ζ-axis. We require that 1, 1 = 1. By choosing a regime where a + µ > −1 we can shrink the ζ-contour about ζ = 0 so that the norm takes the form where we have integrated term-by-term and then made use of Gauss' Theorem for the summation of 2 F 1 (1). This gives us the value for K 1 such that 1, 1 = 1. Now, the result extends for the original contour by analytic continuation. This defines a pre-Hilbert space inner product that then can be extended to obtain a true Hilbert space. The contour integral for the inner product obtained in the previous paragraph requires Re a < 1 for convergence, and this doesn't hold for some of the unitary irreducible representations defined above. Accordingly, we consider a second solution of the weight function equation. The solution that vanishes at ζ = 0 for a + µ > 0 and is integrable at ζ = 1 for a < 1 is At ζ = 1 this function has a branch point with behavior (1−ζ) 1−2a . We write t = re iθ , t = re −iθ , ζ = r 2 and choose our contours of integration for the inner product as the unit circle |e iθ | = 1 and, in the complex ζ-plane, a contour that starts at ζ = 0 and travels just below the real ζ-axis to circle ζ = 1 once in the counterclockwise direction and returns to ζ = 0 just above the real ζ-axis. This integral converges for Re (a + µ) > −1. We require that 1, 1 = 1. By choosing a regime where a < 1 we can shrink the ζ-contour about ζ = 1 so that the norm takes the form ) .
This gives us the value for K such that 1, 1 = 1, and the result extends by analytic continuation to all values of a, µ for which the original contour integral converges. Thus we have an explicit pre-Hilbert function space inner product for each of our differential operator models. In the finite dimensional case we have the reproducing kernel function In the infinite dimensional bounded below case we have the reproducing kernel function which converges as an analytic function and in the Hilbert space norm for |s| < 1. Here, In each case f (t), δ(t, s) = f (s) for f in the Hilbert space.
Difference operator models
There are also difference operator models for the representations of the S3 quadratic algebra. We first give the details for the finite dimensional representations indexed by the nonnegative integer m. Here the operator L 1 is diagonalized: Here f n is a polynomial of order n in the variable λ(t), a special case of the family of dual Hahn polynomials [45, p. 346]. These polynomials are orthogonal with respect to a measure with support at the values t = 0, 1, . . . , m, in agreement with equation (14) for the eigenvalues of L 1 . Indeed, we have (for a < 1) For the infinite dimensional, bounded below, case we have The basis functions are f n (t) = (−1) n s n (t 2 ) where Here f n is a polynomial of order n in the variable t 2 , a special case of the family of continuous dual Hahn polynomials [45, p. 331]. The orthogonality and normalization are given by 1 2π In summary, we have found the following possibilities for bounded below irreducible representations such that L 1 , L 2 are self-adjoint and X is skew adjoint, together with associated one variable models. (Here, n 0 is a positive integer.) representation parameter range model finite dimensional µ = −m, m = 0, 1, 2, . . . differential operators either a < 1 or a + µ > 0 difference operators inf. dim. bdd. below µ > 0 differential operators a < 1 and a + µ > 0 difference operators inf, dim. bdd. below 0 > µ = −n 0 + t, t ∈ (0, 1) differential operators a = n 0 + s, s ∈ (0, 1) inf. dim. bdd. below 0 > µ = −n 0 + t, t ∈ (0, 1) differential operators −t < a < 1 − t 5 Quesne's position dependent mass (PDM) system in a two-dimensional semi-infinite layer In [37] Quesne considered a superintegrable exactly solvable position dependent mass (PDM) system in a two-dimensional semi-infinite layer. Her system is equivalent via a gauge transformation to a standard quantum mechanical problem on the real 2-sphere with potential of the form S3. Indeed, in Quesne's paper we are given the Hamiltonian We adopt coordinates on the unit sphere as where s 2 1 + s 2 2 + s 2 3 = 1 and the metric is ds 2 = q 2 (dx 2 + dy 2 )/ cosh 2 qx. The Laplacian becomes In these coordinates, the degenerate superintegrable system S3 becomes By a gauge transform H O = (cosh qx) −1 H S cosh qx, we get which has solutions k = a + 1/2 or k = −a + 1/2. Since k is assumed positive and a is required to be less than 1, we take a < 0. Suppose we find an eigenvector for H S with eigenvalue λ S , call it v λ S . Then λ Q will be the eigenvalue of v λ Q for H Q . We have the transformations v λ Q = v λS /cosh qx and λ Q = −q 2 λ S + q 2 (1/4−α 2 ) = −q 2 λ S −q 2 k(k−1). Checking the two eigenvalues, we have λ S = −(µ−1+a) 2 +1/4 and λ Q = q 2 (N + 2)(N + 2k + 1). We note that these two values coincide when −µ = m = N + 1 with m an integer.
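For reference, the dual Hahn polynomials appearing in the finite dimensional difference operator model above can be evaluated directly from their terminating 3F2 series, the standard Askey-scheme representation found in [45]. The sketch below uses illustrative parameter values only (no identification with the paper's a, µ, m is made) and checks the normalizations R_0 = 1 and R_n(λ(0)) = 1.

```python
# Hypothetical helper (not from the paper): evaluates a dual Hahn polynomial
# R_n(lambda(x); gamma, delta, N) from its terminating 3F2 series.
from math import prod

def poch(a, k):            # Pochhammer symbol (a)_k
    return prod(a + j for j in range(k))

def dual_hahn(n, x, gamma, delta, N):
    # R_n = sum_k [(-n)_k (-x)_k (x+gamma+delta+1)_k] / [(gamma+1)_k (-N)_k k!]
    total = 0.0
    for k in range(n + 1):
        num = poch(-n, k) * poch(-x, k) * poch(x + gamma + delta + 1, k)
        den = poch(gamma + 1, k) * poch(-N, k) * poch(1, k)
        total += num / den
    return total

# sanity checks: R_0 = 1 for every x, and R_n(lambda(0)) = 1 for every n
N, gamma, delta = 6, 0.3, 0.7
print([dual_hahn(0, x, gamma, delta, N) for x in range(N + 1)])
print([dual_hahn(n, 0, gamma, delta, N) for n in range(N + 1)])
```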
Using the above calculations and the eigenfunctions given in the paper, we can obtain eigenfunctions for the S3 case as where m = 2n + ℓ+ 1, and χ ℓ (y) = sin[(ℓ+ 1)qy] or cos[(ℓ+ 1)qy]. We can rewrite these by noting 1/cosh 2 qx = 1 − tanh 2 qx so that we can write sin qy = s 1 / 1 − s 2 3 and cos qy = s 2 / 1 − s 2 3 , then we obtain χ ℓ (y) = a n T ℓ+1 s 2 where T ℓ and U ℓ are the Chebyshev polynomials of the first and second kind, respectively. Quesne found the S3 quadratic algebra (which closes at order 4) but did not use it for spectral analysis purposes because her problem involved a boundary condition that broke the full quadratic algebra symmetry. Instead she considered her system as a special case of S9 and used the more complicated S9 symmetry algebra that closes at order 6 to find the finite dimensional representations. (Note that although the 1-parameter S3 potential is a limit of the 3parameter S9 potential as two of the parameters go to 0, a discontinuity occurs in the structure of the quadratic algebra. A first order symmetry appears and the number of second order symmetries jumps from 3 to 4.) Quesne's point of view has merit, but it complicates the spectral analysis of the problem, since the only one-variable model is in terms of difference operators and Racah polynomials. From our vantage point of one one variable differential operator analysis for the model, Quesne's boundary conditions amount to decomposing an irreducible subspace corresponding to an m-dimensional representation into a direct sum of even and odd parity subspaces V + , V − . (Indeed her boundary conditions require choice of χ ℓ (y) in the cosine form for ℓ even and in the sine form for ℓ odd.) Let P be the operator P f (t) = t m f (1/t). Since P 2 = I and ||f n || = ||f m−n ||, it is clear that P is unitary. We define unit vectors Then for m = 2k the vectors Φ + ℓ , ℓ = 0, . . . , k form an orthonormal basis for V + m and the vectors Φ − ℓ , ℓ = 0, . . . , k − 1 form an on basis for V − m . For m = 2k − 1, the vectors Φ + ℓ , ℓ = 0, . . . , k − 1 form an ON basis for V + m and the vectors Φ − ℓ , ℓ = 0, . . . , k − 1 form an orthonormal basis for V − m . These basis vectors are very easily expressible in terms of the one variable differential operator model, where they are sums of two monomials. The basis used by Quesne corresponds to the V − subspaces.Thus our models can be used to carry out the spectral analysis for this PDM system, and they yield a simplification.
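The rewriting of χ_ℓ(y) in terms of T_{ℓ+1} and U_ℓ above is just the trigonometric characterization of the Chebyshev polynomials. A short numerical check of those identities (standard SciPy routines; the angle θ plays the role of qy, and the values are illustrative, not the paper's wavefunctions):

```python
# T_l(cos theta) = cos(l theta) and U_l(cos theta) = sin((l+1) theta) / sin(theta)
import numpy as np
from scipy.special import eval_chebyt, eval_chebyu

rng = np.random.default_rng(0)
theta = rng.uniform(0.1, np.pi - 0.1, size=5)   # plays the role of q*y
for ell in range(4):
    ok_T = np.allclose(eval_chebyt(ell + 1, np.cos(theta)), np.cos((ell + 1) * theta))
    ok_U = np.allclose(eval_chebyu(ell, np.cos(theta)),
                       np.sin((ell + 1) * theta) / np.sin(theta))
    print(ell, ok_T, ok_U)    # True, True for every ell
```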
Classical models for S3
Now we describe how the methods of classical mechanics lead directly to the quantum models. The classical system S3 on the 2-sphere is determined by the Hamiltonian where J 1 = s 2 p 3 −s 3 p 2 and J 2 , J 3 are cyclic permutations of this expression. For computational convenience we have imbedded the 2-sphere in Euclidean 3-space. Thus we use the Poisson bracket for our computations, but at the end we restrict to the sphere s 2 1 + s 2 2 + s 2 3 = 1. The classical basis for the constants of the motion is The structure relations are (16) and the Casimir relation is From the results of [46] we know that additive separation of variables in the Hamilton-Jacobi equation H = E is possible in subgroup type coordinates in which X , L 1 or S = 2(L 1 − iL 2 ) − H + X 2 , respectively, are constants of separation. This corresponds to two choices of spherical coordinates and one of horospherical coordinates, respectively. We seek two variable models for the Poisson bracket relations (16), (17). There is also separation in ellipsoidal coordinates (i.e., non-subgroup type coordinates) but we will not make use of this here. The justification for these models comes from Hamilton-Jacobi theory. The phase space for our problem is 4-dimensional. Thus it is possible to find canonical variables H, I, Q, P such that {I, H} = {P, Q} = 1 and all other Poisson brackets vanish. In terms of H and the other canonical variables the Poisson bracket can be expressed as (As follows from standard theory [47] one can construct a set of such canonical variables from a complete integral of the Hamilton-Jacobi equation. Our 2D second order superintegrable systems are always multiseparable, and each separable solution of the Hamilton-Jacobi equation provides a complete integral. Thus we can find these canonical variables in several distinct ways.) Now we restrict our attention to the algebra of constants of the motion. This algebra is generated by H, L 1 , L 1 , X , subject to the relation (17). Thus, considered as functions of the canonical variables, the constants of the motion are independent of H. If we further restrict the system to the constant energy space H = E then we can consider H as non varying and every constant of the motion F can be expressed in the form F(E, Q, P). This means that the Poisson bracket of two constants of the motion, F, G can be computed as Thus all functions depend on only two canonically conjugate variables Q, P and the parameter E. This shows the existence and the form of two variable models of conjugate variables. However the proof is not constructive and, furthermore, it is not unique. Two obtain constructive results we will use the strategy of setting Q equal to one of the constants of the motion that corresponds to separation of variables in some coordinate system, and then use (18) for the Poisson bracket and require that relations (16), (17) hold. In order to make clear that we are computing on the constant energy hypersurface expressed in canonical variables we will use a different notation. We will set Q E = c, P E = β so, F(H, Q, P) = f (c, β), G(H, Q, P) = g(c, β), and For our first model we require X ≡ X E = c. Substituting this requirement and H = E into the structure equations we obtain the result In this model, and in all other classical models, β is not uniquely determined: we can replace it by β ′ = β + k(c) for any function k(c) and the variables c and β ′ remain canonically conjugate. For a second model we require L 1 ≡ (L 1 ) E = c and proceed in a similar fashion. 
The result is For the third and last model we need to diagonalize the symmetry S = 2(L 1 − iL 2 ) − H + X 2 corresponding to separation in horospherical coordinates. For this it is convenient to rewrite the structure equations (16), (17) in terms of the new basis S, L 1 + iL 2 , X : {S, X , } = 2i(S + α), {S, L 1 + iL 2 } = −2iX (S − 2X 2 + 2H + 3α), The Casimir relation is For model III we set S = c and obtain III : S = c, X = −2i(c + α)β,
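All three models above are built on the reduced one-degree-of-freedom Poisson bracket of Eq. (18). As a minimal symbolic sketch (assuming the sign convention {c, β} = 1; the test functions are arbitrary), the bracket and a trivial consistency check can be written as follows.

```python
# One-degree-of-freedom Poisson bracket in the conjugate pair (c, beta).
import sympy as sp

c, beta = sp.symbols('c beta')

def pb(f, g):
    """Poisson bracket {f, g} restricted to the single conjugate pair (c, beta)."""
    return sp.diff(f, c) * sp.diff(g, beta) - sp.diff(f, beta) * sp.diff(g, c)

print(pb(c, beta))                                   # 1, the canonical relation
print(sp.simplify(pb(c, c * beta**2) - 2 * c * beta))  # 0, since {c, c beta^2} = 2 c beta
```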
Classical model → quantum model
What have we achieved with these classical models? For one thing they show us how to parameterize the constants of the motion and exhibit their functional dependence. More important for our purposes, they give us a rational means to derive the possible one-variable quantum models. This may seem surprising. How can classical mechanics determine quantum mechanics uniquely? How can structures such as the Wilson family of orthogonal polynomials, containing the Hahn polynomials, be derived directly from classical mechanics? The point is that the structures we are studying are second order superintegrable systems in 2D. In papers [30,44,27] it has been shown that there is a 1-1 relationship between the quantum and classical versions for such systems, for all 2D Riemannian spaces. (Similarly there is a 1-1 relationship in 3D for nondegenerate potentials on conformally flat spaces.) The structures are not identical, since as we can see from the examples in this paper, the structure relations in the classical and quantum cases are not identical; there are quantum modifications of the classical equations. Although we know of no direct prescription for their determination, nonetheless the quantum structure equations are uniquely determined by the classical structure equations. Given a second system of second order constants of the motion we write down the corresponding quantum system via the usual correspondence, where products of classical functions are replaced by symmetrized quantum operators, and generate the quadratic algebra by taking repeated commutators. Even order classical symmetries correspond to formally self-adjoint quantum symmetries, and odd order classical symmetries correspond to formally skew-adjoint quantum symmetries. (This relationship no longer holds for third order superintegrable systems [21,40].) We will demonstrate here how to get quantum models from the classical ones that we have derived. The basic prescription for the transition from the classical case to the operator case is to replace a pair of canonically conjugate variables c, β by c → t, β → ∂ t . (There is no obstruction to quantization for second order superintegrable systems.) Once an appropriate choice of β is made in a classical model, we can use this prescription to go to a differential operator model of the quantum structure equations. In particular model III above suggests a operator model such that S is multiplication by c, X is a first order differential operator in c and L 1 + iL 2 is a fourth order differential operator. The result, whose existence is implied by the 1-1 classical/quantum relationship for second order superintegrable systems, is The leading order differential operators terms agree with the classical case but there are lower order correction terms needed to correct for the noncommutivity of t and ∂ t . We can realize various irreducible representations of the quadratic algebra by choosing subspaces of functions of t on which the operators act. This model agrees with (10), (11), (12) in the case where C(n − 1, n) = 1 and C(n + 1, n) is fourth order. However, there we had a space spanned by a countable number of eigenvectors of the skew-adjoint symmetry X whereas here we want the spectral decomposition of the self-adjoint symmetry S to govern the model. This forces L 2 to be skew-adjoint and X to be self-adjoint. Thus, though the differential operators are formally the same, the Hilbert spaces and the spectral analysis are different. All the representations are infinite-dimensional. 
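The ordering corrections mentioned above can be made concrete on the simplest product: under the prescription c → t, β → ∂_t with symmetric ordering, the classical monomial cβ becomes t d/dt + 1/2 rather than the naive t d/dt. The following toy symbolic check is schematic (it is not the paper's construction, only an illustration of where such lower order terms come from).

```python
# Symmetric (Weyl-type) ordering of c*beta under c -> t, beta -> d/dt.
import sympy as sp

t = sp.symbols('t')
f = sp.Function('f')(t)

op_t_d = t * sp.diff(f, t)            # t d/dt acting on a test function f
op_d_t = sp.diff(t * f, t)            # d/dt t acting on f
symmetrized = (op_t_d + op_d_t) / 2   # symmetrized operator acting on f

# differs from the naive t d/dt by the ordering term f/2
print(sp.simplify(symmetrized - (t * sp.diff(f, t) + f / 2)))   # 0
```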
One class can be realized by closing the dense subspace of C ∞ functions with compact support on 0 < t < ∞, where the measure is dt/t. The spectrum of S is continuous and runs over the positive real axis. Here X also has a continuous real spectrum covering the full real axis. In particular the generalized eigenfunction of X with real eigenvalue λ is proportional to t −iλ , and µ is pure imaginary. Thus the spectral analysis of X is given by the Mellin transform. There is a similar irreducible representation defined on −∞ < t < 0. By a canonical transformation we can also get models of these representations in which both C(n − 1, n) and C(n + 1, n) are second order. (We shall illustrate this explicitly for model I.) Then the spectral decomposition of S is given by the Hankel transform. Since these particular eigenspaces of H admit no discrete spectrum for any of the symmetries of interest, we shall not analyze them further. Now we consider model I, (19). Due to the presence of trigonometric terms in β we cannot realize this as a finite order differential operator model. However, we can perform a hodograph transformation, i.e. use the prescription β → t, c → −∂ t , to realize the model. This would seem to make no sense due to the appearance of functions of c under the square root sign. However, before using the prescription we can make use of the freedom to make a replacement β ′ = β + g(c), which preserves canonical variables. We choose but leave c unchanged. Then we find with X as before. Now we apply the quantization prescription β → t, c → −∂ t and obtain a model in which both L 1 and L 2 are fourth order and X is a first order differential operator. This is, in fact, identical, to within a coordinate change, to model (21). One might also try to obtain a difference operator model from (22) with the replacement c → t, β → ∂ t , so that e 2iβ would become a difference operator. However, this difference operator quantum model is equivalent to what one would get from the β → t, c → −∂ t model by taking a Fourier transform. Thus we don't regard it as new.
There is an alternate way to obtain a quantum realization from model I. We use the fact that Now we let 2β → 2β + φ to obtain Now the prescription β → t, c → −∂ t leads to a quantum realization of L 1 , L 2 by second order differential operators. Indeed Here ξ is arbitrary and can be removed via a gauge transformation. The change of variable τ = e 2it reduces this model to the form (13). This shows that the flexibility we had in constructing differential operator models from the abstract representation theory by renormalizing our basis vectors f n is replaced in the classical model case by appropriate canonical transformations c → c, β → β + g(c). In either case there is essentially only one differential operator model that can be transformed in various ways.
It is clear that model II cannot produce finite order differential operator realizations of the quantum quadratic algebra, due to the intertwining of square root dependence for c and exponential dependence for β. However, it will produce a difference operator realization via Taylor's theorem: e a∂t f (t) = f (t + a). To show this explicitly we make a coordinate change such that 2 √ c + α ∂ c = ∂ C in (20), which suggests realizations of the quantum operators in the form A straightforward computation shows that the quantum algebra structure equations are satisfied if and only if Since α = −a 2 + 1/4 and E = −(µ − 1 + a) 2 + 1/4 for bounded below representations, we can factor (24) simply to obtain Note that only the product (24) is determined, not the individual factors. Thus we can choose h(t), say, as an arbitrary nonzero function and then determine m(t) from (24). All these modifications of the factors are accomplished by gauge transformations on the representation space: f (t) = ρ(t)f (t), where ρ(t) is the gauge function. If we choose the factors in the form then we get exactly the model (15). The finite dimensional model is related by the simple change of variables t → i(t − a + 1/2), µ = −m. In any case, there is only a single solution of these equations, up to a gauge transformation.
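The shift-operator identity e^{a∂_t} f(t) = f(t + a) used here can be checked directly for polynomials, where the exponential series terminates; this is exactly what turns the exponential dependence on β in model II into a finite difference operator. A small illustrative verification (any polynomial works):

```python
# e^{a d/dt} acting on a polynomial reproduces the shifted polynomial exactly.
import sympy as sp

t, a = sp.symbols('t a')
f = 3*t**4 - 2*t**2 + t - 7

# truncated exponential series of the derivative; it terminates for polynomials
shifted = sum(a**k * sp.diff(f, t, k) / sp.factorial(k) for k in range(10))
print(sp.expand(shifted - f.subs(t, t + a)))     # 0
```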
The classical model for S9
This is the system on the complex sphere, with nondegenerate potential where s 2 1 + s 2 2 + s 2 3 = 1. The classical S9 system has a basis of symmetries where H = L 1 + L 2 + L 3 + a 1 + a 2 + a 3 and the J i are defined by J 3 = s 1 p s 2 − s 2 p s 1 and cyclic permutation of indices. The classical structure relations are Taking L 1 = c, H = E with c, β as conjugate variables, we find the model This suggests a difference operator realization of the quantum model. In the quantum case the symmetry operators L 1 , L 2 , L 3 are obtained from the corresponding classical constants of the motion (26) through the replacements J k → J k where the angular momentum operators J k are defined by J 3 = x 1 ∂ x 2 − x 2 ∂ x 1 and cyclic permutation of indices.
Here i, j, k are chosen such that ǫ ijk = 1 where ǫ is the pure skew-symmetric tensor, R = [L 1 , L 2 ] and {L 1 , L j } = L i L j + L j L i with an analogous definition of {L 1 , L 2 , L 3 } as a sum of 6 terms.
In practice we will substitute L 3 = H − L 1 − L 2 − a 1 − a 2 − a 3 into these equations. Proceeding exactly as in the S3 case (23), (24), (25), we find that the difference operator analog of (27) for the quantum quadratic algebra is The quadratic terms factor into simple linear terms, and just as in the S3 case, it is only ℓ(t) and the product h(t)m(t + i) that are uniquely determined. We can change the individual factors h(t), m(t) by a gauge transformation. With the change of variable t = iτ and a gauge transformation to an operator with maximal symmetry in τ , we obtain the standard model It follows that L 2 = h(τ )E +1 + m(τ )E −1 + ℓ(τ ) is a linear combination of L 1 and the difference operator whose eigenfunctions are the Wilson polynomials, just as found in [39].
Conclusions and prospects
This paper consists of two related parts. In the first part we have studied the representation theory for the quadratic algebra associated with a 2D second order quantum superintegrable system with degenerate potential, namely S3. We have classified the possible finite-dimensional representations and infinite dimensional bounded below representations, i.e., those with a lowest weight vector. Then we have constructed the possible Hilbert space models for these representations, in terms of differential operators or of difference operators acting on spaces of functions of one complex variable. These models make it easy to find raising and lowering operators for the representations and to uncover relationships between the algebras and families of orthogonal polynomials. Here S3 has been treated as an example of a degenerate potential superintegrable system. The example S9 of a nondegenerate potential was treated in [39]. In 2D there are 13 equivalence classes of superintegrable systems with nontrivial potentials: 7 nondegenerate and 6 degenerate. Results for all of these cases will be included in the thesis of the third author.
In the second part of this work we have taken up the study of models of the quadratic algebras associated with the classical second order superintegrable systems. In each model there is only a single pair of canonically conjugate variables, rather than the 2 pairs in the original classical system. We showed, based on classical Hamilton-Jacobi theory, that such models always exist. Then we described a procedure to derive the one variable models for the quantum quadratic algebras from the models for the classical quadratic algebras. Since it is known that there is a 1-1 relationship between classical and quantum second order superintegrable systems (even though the algebras are not the same), it is not too surprising that one should be able to compute the quantum models from the classical models. However, we have made this explicit.
We applied this procedure not only to obtain the differential and difference operator models for system S3, but also for the generic system S9. For S9 we showed that there is a difference operator model associated with general Wilson polynomials, but no differential operator model. This construction demonstrates that the theory of general Wilson polynomials is imbedded in classical mechanics in a manner quite different from the usual group theory (Racah polynomial) approach.
There is much more work to be done. Once the models are worked out and the corresponding functional Hilbert spaces are constructed, usually Hilbert spaces with a kernel function, one needs to find intertwining operators that map the model space to the space on which the quantum Schrödinger operator is defined. Also, we have demonstrated how to determine the classical models and shown how they quantize in a unique fashion. A puzzle here is that we are finding classical models corresponding to non-hypergeometric type variable separation. These classical models typically involve elliptic functions. We do not yet understand how they can be quantized. They clearly do not lead to differential or ordinary difference operator quantum models.
Another part of our effort is to study the structure of quadratic algebras corresponding to 3D nondegenerate superintegrable systems, and to find two variable models for them. This is a much more difficult problem than in 2D, where it led to general Wilson and Racah polynomials, among other models. The quadratic algebra still closes at order 6 but now there are 6 linearly second order symmetries, rather than 3, and they are functionally dependent, satisfying a polynomial relation of order 8. There are 4 commutators, instead of 1. For the models we expect to find multivariable extensions of Wilson polynomials, among many other constructs. | 10,512.6 | 2008-01-18T00:00:00.000 | [
"Physics"
] |
A"Vector-like chiral"fourth family to explain muon anomalies
The Standard Model (SM) is amended by one generation of quarks and leptons which are vector-like (VL) under the SM gauge group but chiral with respect to a new $\mathrm{U}(1)_{3-4}$ gauge symmetry. We show that this model can simultaneously explain the deviation of the muon $g-2$ as well as the observed anomalies in $b\rightarrow s\mu^+\mu^-$ transitions without conflicting with the data on Higgs decays, lepton flavor violation, or $B_s-\bar{B}_s$ mixing. The model is string theory motivated and GUT compatible, i.e. UV complete, and fits the data predicting VL quarks, leptons and a massive $Z'$ at the $\mathrm{TeV}$ scale, as well as $\tau\to3\mu$ and $\tau\to\mu\gamma$ within reach of future experiments. The Higgs couplings to SM generations are automatically aligned in flavor space.
INTRODUCTION
The Standard Model (SM) is a highly successful theory in predicting and fitting many experimental measurements, with few exceptions. One of the discrepancies between the SM prediction and experimental measurement that has been known for a long time is the muon anomalous magnetic moment. The discrepancy between the measured value and the SM prediction is [1,2] $\Delta a_\mu = a_\mu^{\rm exp} - a_\mu^{\rm SM} = 288(63)(49) \times 10^{-11}$.
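For orientation (assuming the two quoted uncertainties are independent and can be combined in quadrature), this corresponds to a tension of roughly 3.6σ:

```python
# quadrature combination of the experimental and theoretical errors
delta = 288e-11
sigma = ((63e-11) ** 2 + (49e-11) ** 2) ** 0.5
print(round(delta / sigma, 1))   # ~3.6
```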
A simple extension of the SM that can explain the discrepancy of the muon g − 2 is the addition of VL leptons that couple exclusively to muons [27][28][29]. On the other hand, the anomalies in b → sµ + µ − transitions can be explained by a new massive vector boson of a spontaneously broken U(1) µ−τ gauge symmetry together with the introduction of VL quarks [30,31] (see [32] for a generalization of the new gauge symmetry). Indeed, it has been shown that by combining an additional Z ′ , VL leptons, and VL quarks one can successfully address both the muon g − 2 and the anomalous B physics observables simultaneously [33][34][35]. Typically these models predict significant deviations from the SM in h → µµ [28,29] and h → µτ [31,34], and have an upper bound on the Z ′ mass from keeping B s −B s oscillations close to their SM value [30].
In the present paper we suggest a holistic way of solving the discrepancies in (g − 2) µ and b → sµ + µ − . We amend the SM by one complete family of fermions, i.e. a full spinor representation of SO (10), which is VL with respect to the SM but chiral with respect to a new spontaneously broken "U (1) 3−4 " gauge symmetry. Under the new gauge group the third SM family and the left-handed part of the new "VL" family have charges +1 and −1, respectively, while all other fermions are neutral. Our model is motivated by heterotic string orbifold constructions [36][37][38][39][40][41][42][43], which, in addition to the full MSSM spectrum, typically contain myriads of states which are VL with respect to the SM gauge group, but chiral under new U (1) gauge symmetries. In addition, there are many SM singlet scalars that break the additional gauge symmetries, thus giving mass to the vector-like states and the extra gauge bosons. While in earlier constructions these extra states were lifted to the string scale, our model is a prototype of what can happen if at least one extra generation is kept light, i.e. at the TeV scale. Our analysis is not supersymmetric, but it could easily be extended to a supersymmetric model in which case gauge coupling unification is maintained.
We find that the model can simultaneously fit the observed quark and lepton masses, as well as the (g − 2) µ and b → sµ + µ − anomalies, without violating bounds from electroweak precision observables, lepton flavor violating (LFV) decays or B s −B s mixing. Interestingly, the electroweak singlet Higgs boson couplings in our model are automatically aligned with the SM values to a very high degree. Contrary to [30,31,34], there is no upper bound on the Z ′ mass from the B s −B s mixing constraint, simply because the "VL" fermions and the Z ′ simultaneously obtain masses of the order of the U(1) 3−4 breaking scale.
To substantiate our arguments we present two data points that can fit all measured observables while predicting others. The masses of the new quarks and leptons, as well as of the new Z ′ , are all at the TeV scale. The Z ′ in our example has very suppressed couplings to the first family, meaning that Z ′ production at the LHC is suppressed. B s −B s mixing is predicted to deviate from the SM at the level of a few percent. There are significant enhancements in BR(B s → K ( * ) ττ ) and BR(B s → φττ ), while R νν K ( * ) is suppressed. Furthermore, our best fit points predict BR(τ → µγ) and BR(τ → 3µ) within reach of upcoming experiments.
MODEL
The model under investigation is the SM with three right-handed neutrinos extended by a complete extra generation of left-chiral fields and a complete extra generation of rightchiral fields. Furthermore, we introduce a new "U (1) 3−4 " gauge symmetry under which the third SM generation as well as the left-chiral part of the fourth generation of particles is charged. The U (1) 3−4 gauge symmetry is spontaneously broken by the vacuum expectation value (VEV) of the new scalar Φ. All fields and their corresponding quantum numbers are summarized in Table 1. The relevant part of the Lagrangian for this study is given by We take all couplings to be real and -in some GUT spirit -set many of them alike. Couplings to the up quark sector electroweak singlets (u a R , u 3 R , U R , and U L ) are not displayed because they will not be constrained by our analysis. It is summed over the repeated indices a = 1, 2 of the 2 ⊕ 1 flavor structure of the SM families which can, for example, originate from a D 4 flavor symmetry [37,40,[42][43][44][45]. The first and second families are distinguished by the direction of the D 4 breaking VEV ϕ a = δ a2 v ϕ . We assume this alignment to happen at a high-scale M (one should imagine M ∼ M string or M ∼ M GUT ), and the corresponding effective operator coefficient, thus, should be imagined as v ϕ ≡ Φ ϕ 2 /M whereΦ is a SM and U(1) 3−4 neutral scalar that gets a VEV around the weak scale. This justifies our assumption here that the first family does not directly mix with the VL states. We will focus on the flavor structure of the second and third generations in this study, remarking that the first family can always be fit in. A more detailed analysis should include all three families and their flavor physics, but that is beyond the scope of the present paper.
Charged Lepton and Down Quark Masses
The charged lepton mass terms are given by a 4 × 4 mass matrix M with indices A, B = 1, …, 4, where the scalar VEVs and couplings are all assumed to be real. Analogously, the down quark mass terms are given by a matrix M_d, which has exactly the same structure as M with the replacements λ_E → λ_D, λ_L → λ_Q, y_τ → y_b, and y_µ → y_s. Let U^{ℓ,d}_L and U^{ℓ,d}_R be unitary matrices that diagonalize the respective mass matrix; the physical fields in the mass basis are then obtained by applying these rotations to the gauge-basis fields.
Neutrino Masses
Arranging the left- and right-handed neutral states into vectors, the neutrino masses can be written in terms of a mass matrix with indices α, β = 1, …, 8, where N^C_L = N_R. The Dirac mass terms M_D have the same structure as M with the replacements λ_E → λ_N, y_τ → y_{ν1}, and y_µ → y_{ν2}; the Majorana mass terms occupy a few specific entries, with all other elements being zero. Assuming the hierarchy M ∼ M_{L,R} ≫ v_{Φ,ϕ} ≫ v, the neutrino mass matrix can be analytically diagonalized, and we give details about that in App. A. The resulting physical states have masses that hold up to corrections of the order v²_{Φ,ϕ}/M. There are four sterile neutrinos with mass at the high scale. Furthermore, there are two light active neutrinos with mass of order M_W²/M_GUT and one (mostly) Dirac neutrino with a TeV-scale mass (cf. App. A). Adding the first generation back in gives one additional high-scale sterile neutrino and one additional light active neutrino.
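For orientation, the scale separation quoted above follows the usual see-saw pattern; the sketch below is generic (the block labels M_D and M_R are schematic placeholders, not the paper's exact App. A matrices):

```latex
% Generic see-saw block structure (schematic, not the exact matrices of App. A):
% a Dirac block M_D of order v, v_Phi and a heavy Majorana block M_R of order M.
\begin{align}
M_\nu &\simeq \begin{pmatrix} 0 & M_D \\ M_D^{T} & M_R \end{pmatrix},
\qquad M_R \sim M \;\gg\; M_D , \\
m_{\text{light}} &\simeq -\, M_D\, M_R^{-1} M_D^{T} \;=\; \mathcal{O}\!\left(\frac{v^2}{M}\right),
\qquad
m_{\text{heavy}} \simeq M_R .
\end{align}
```

States whose Dirac mass partners sit at the v_Φ scale instead pair up into the TeV-scale (mostly) Dirac neutrino mentioned above.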
Z-Lepton Couplings
The Z-lepton couplings in the mass basis are given in terms of hatted coupling matrices ĝ. The un-hatted coupling matrices are defined in the gauge basis, with overall normalization g_{Z,SM}, and we use the abbreviations s_W = sin θ_W, c_W = cos θ_W. Since these matrices are not proportional to the identity matrix, the Z-lepton couplings are not diagonal in the mass basis. Hence, this model has LFV Z boson decays, which are, however, only effective amongst the heavy VL quarks and leptons.
W -Lepton Couplings
The W -lepton couplings in the mass basis are with the 8 × 4 coupling matrices of the gauge basis
Z′ Couplings
The Z′ couplings to charged leptons in the mass basis are given by hatted coupling matrices ĝ, built from the U(1)_{3−4} charge matrices. The Z′ couplings here are not left-right symmetric; recall the charge assignment of Tab. 1 and our skewed definition of the right-handed states in (13) and (16). The Z′–down quark couplings in the mass basis are given completely analogously. The Z′-mediated flavor changing neutral currents (FCNC) between the SM 2 ↔ 3 generations are naturally suppressed because they only arise from the mixing with the heavy VL states.
The Z′ couplings to neutrinos in the mass basis can be written analogously, with a hatted coupling matrix ĝ obtained from the corresponding gauge-basis charge matrix.
Higgs-Lepton Couplings
The couplings between the physical Higgs boson, h, and the charged leptons in the mass basis follow from the gauge-basis Yukawa couplings. A very interesting feature of this model is that the masses of the SM families are, to a very high accuracy, linear in the Higgs VEV. Thus, the Higgs couplings in the mass basis, Ŷ, are to a high precision diagonal in the lower 2 × 2 block. Hence, the Higgs couplings to the SM states are very much SM-like and there are no significant flavor-violating Higgs couplings among the SM states. We give an analytic proof of this feature in App. B. Flavor off-diagonal couplings of the VL states (also to the SM states) can be sizable.
OBSERVABLES
Lepton Non-Universality
Generally, our model gives rise to lepton non-universality in the operators O^{(′)}_{9,10} by tree-level Z′ exchange. The corresponding effective contributions to the Wilson coefficients are set by the Z′ couplings to the b–s current and to the lepton pair; these couplings are expressible solely through mixing matrix elements. While we focus on the Z′ coupling to muons in order to explain the observed anomalies, our model also modifies the effective Wilson coefficients C^{(′),ττ}_{9,10} and C^{(′),νν}_{9,10}, leading to lepton non-universality also in BR(B_s → K^(*)ττ) and BR(B_s → φττ). Quantitative results for these observables have been obtained using the formulas given in [46], from where we also adopt the SM prediction (cf. also [47,48]). The NP contributions to the Wilson coefficients C^{(′),νν}_{9,10} affect the SM prediction for BR(B_s → K^(*)νν), and we have followed [49,50] to quantify these effects in our model.
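As a reference point, the generic tree-level Z′ matching onto the semileptonic Wilson coefficients takes the schematic form below (the overall normalization, which involves G_F, α and the CKM factor V_tb V_ts*, as well as all sign conventions are omitted; the ĝ are the flavor-changing and flavor-diagonal Z′ couplings introduced above):

```latex
% Schematic tree-level Z' matching onto C_9 and C_10 (conventions and overall
% normalization omitted); vector/axial muon couplings enter C_9/C_10.
\begin{align}
C_9^{\mu\mu,\rm NP} &\propto
-\,\frac{\hat g^{\,bs}_{L}\left(\hat g^{\,\mu\mu}_{L}+\hat g^{\,\mu\mu}_{R}\right)}{M_{Z'}^{2}},
&
C_{10}^{\mu\mu,\rm NP} &\propto
\frac{\hat g^{\,bs}_{L}\left(\hat g^{\,\mu\mu}_{L}-\hat g^{\,\mu\mu}_{R}\right)}{M_{Z'}^{2}},
\end{align}
```

with the primed coefficients obtained from the right-handed quark coupling ĝ^{bs}_R. In this schematic form it is easy to see why approximately left-right symmetric lepton couplings (ĝ^{µµ}_L ≈ ĝ^{µµ}_R) enhance C_9 while suppressing C_10, as found for the best fit points discussed below.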
Muon Anomalous Magnetic Moment
The W, Z, and h contributions to the anomalous magnetic moment of the muon are very close to their SM values and we do not detail them here. On the other hand, the Z′ contribution can be sizeable, despite its TeV-scale mass. This is due to flavor off-diagonal Z′ couplings between the muon and the VL leptons, allowing them to contribute significantly to the g − 2 loop. Since the VL leptons and the Z′ have masses of the same scale, the leading-order contribution has one power of m_µ/M_{Z′} less than the naive flavor-diagonal Z′ contribution (cf. e.g. the discussion in [51, sec. 7.2]). Parametrically, the dominant modification of (g − 2)_µ in our model is a sum over the VL leptons of terms of order m_µ m_E/M_{Z′}² times the relevant flavor-changing couplings. Naively this points to a scale v_Φ ∼ 10² TeV, but the FC couplings to the VL leptons can easily be O(0.1). In addition, the contributions of individual VL leptons can partly cancel against one another, and we will see this effect at work in our numerical analysis below. We give detailed formulas for δa^{Z′}_µ in App. C.
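Written out, this chirally enhanced scaling takes the schematic form below (loop factors and the O(1) loop function are absorbed into the proportionality; the ĝ are the flavor-changing Z′ couplings, and the full expressions are deferred to App. C):

```latex
% Schematic chirally-enhanced Z' contribution: the chirality flip occurs on the
% heavy internal lepton E, giving one power of m_E instead of m_mu.
\begin{equation}
\delta a^{Z'}_{\mu} \;\propto\; \sum_{E\,\in\,\text{VL leptons}}
\frac{m_\mu\, m_{E}}{M_{Z'}^{2}}\;
\mathrm{Re}\!\left[\hat g^{\,\mu E}_{L}\,\hat g^{\,\mu E\,*}_{R}\right].
\end{equation}
```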
There is a new tree-level contribution to B_s–B̄_s mixing due to Z′ exchange. We adopt the results and numerical factors from [30,52] and estimate the relative change of the mixing matrix element δM_12 accordingly. The most recently updated theoretical uncertainty shows that a deviation from the SM of δM_12 ≲ 6% can currently not be excluded [53]. This gives an important constraint on the down-sector flavor-changing Z′ couplings. In our model the b–s coupling is suppressed because it only arises from the mixing with the heavy VL states, such that the Z′ can be kept at the TeV scale consistent with the bound derived in [54].
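For orientation, the generic tree-level Z′ contribution to the mixing amplitude has the schematic form below (only the leading left-handed operator is shown; bag parameters and the numerical factors taken from [30,52] are omitted):

```latex
% Schematic tree-level Z' contribution to B_s mixing from the flavor-changing
% coupling g^{bs}_L; primed and mixed-chirality operators are analogous.
\begin{equation}
\mathcal{H}^{\rm NP}_{\rm eff} \;\supset\;
\frac{\big(\hat g^{\,bs}_{L}\big)^{2}}{2\,M_{Z'}^{2}}
\left(\bar s\,\gamma^{\mu} P_{L}\, b\right)\left(\bar s\,\gamma_{\mu} P_{L}\, b\right),
\qquad
\frac{\delta M_{12}}{M_{12}^{\rm SM}} \;\propto\;
\frac{\big(\hat g^{\,bs}_{L}\big)^{2}}{M_{Z'}^{2}} .
\end{equation}
```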
Lepton Flavor Violating τ Decays
Tree-level Z′ exchange also induces the decay τ → 3µ, which we estimate following Refs. [55,56]. There is also a new contribution to τ → µγ due to the Z′ and the new VL leptons in the loop, which is enhanced by a power of the VL mass. Using the results of [57] (cf. also [58–60]), we estimate the leading-order contribution as a sum over internal VL leptons. For the numerical analysis we have used a more detailed result, which we present in App. D.
Other Observables
Flavor-violating couplings of the Z′ or of the SM scalar h to the SM families are generally suppressed far below their experimental thresholds. In contrast, flavor-changing couplings to the heavy VL leptons can be large. Since our Z′ is heavy, neutrino trident production [30,61] does not give any important constraints. Lepton unitarity bounds (cf. e.g. [62,63]) are easily fulfilled. In addition, there are no constraints coming from the branching ratios for h → γγ or h → gg via loop diagrams, since these contributions are suppressed by factors of (v/M_VL)² (see e.g. [59,64–66]).
Best fit point I. In the fit, the shift in B_s–B̄_s mixing is only required to stay within 15%, to be consistent with the data and theoretical error of B_s–B̄_s mixing. All other observables are not constrained in the fit, i.e. they arise as predictions of our best fit points. However, there are currently more parameters than observables, and so it is not excluded that there are more points that fit the data well with different predictions. The first family could always be included in the analysis in a straightforward way without affecting our conclusions.²
Results
We have found two points which give a very good fit for the cases (I.) and (II.); they are listed in Tab. 2 (and in Tab. 4 in App. 4). We cannot find a good fit for the scenario C_9^NP = −C_10^NP. The predictions of the best fit points are listed in Tab. 3 together with current experimental bounds. Effects on other observables such as h → µτ, h → γγ, h → gg, neutrino trident production, or PMNS unitarity violation have also been considered, but they are robustly suppressed in this model and so we do not discuss them in detail. While Z → µτ is practically absent at tree level, the dominant contribution arises from a one-loop diagram involving the Z′ and VL leptons in the loop. A rough estimate of this shows that BR(Z → µτ) nonetheless comes out well below the current bound of 1.2 × 10⁻⁵ [2]. The loop contributions to (g − 2)_µ from Z, W, and h are very close to their SM values, while the FC Z′ exchange with VL leptons in the loop completely accounts for the anomaly. The Z′ contribution to B_s–B̄_s mixing is suppressed, while larger flavor-diagonal couplings to muons can explain a sizable C_9^NP. The fit prefers a corner of the parameter space where λ_E ∼ λ_L, which makes the Z′ couplings to leptons approximately left-right (anti-)symmetric (the couplings are LR symmetric or LR anti-symmetric depending on the specific coupling, and we display the coupling matrices for one of the best fit points in App. E). The approximately LR-symmetric couplings to the mu and tau leptons lead to an enhanced contribution to C_9^NP (and possibly also C′_9) and a dramatic cancellation in the axial-vector couplings to leptons, C^{(′)}_10. By contrast, the approximately equal but opposite LR (anti-)symmetric contributions of individual VL leptons to the muon g − 2 and to τ → µγ cancel only to an order of magnitude. There are no cancellations in τ → 3µ, where the couplings, in fact, add up constructively. Nevertheless, the tree-level process τ → 3µ is suppressed by the naturally small flavor off-diagonal couplings of the Z′ to the SM fermion generations. The best-fit predictions for BR(τ → µγ) and BR(τ → 3µ) fall close to regions that can be probed by future experiments [75], cf. Fig. 1(a). A positive value of C′_9 allows for a negative shift of δM_12, thereby cushioning a 1.8σ tension between SM and experiment [53]. This happens to be the case for our best fit point (II.). Fig. 1(a) and Fig. 1(c) show the current experimental exclusion [70,71] and prospects [75], respectively. The lower cross in Fig. 1(b) denotes the SM expectation [46], while the upper crosses show our best fit points with errors computed as in [46]. The green area in Fig. 1(d) shows the best fit value and errors as in eq. (1).
² Including the first family in the most straightforward way, the Yukawa couplings and hence the resulting mass matrices are extended accordingly (and similarly for neutrinos and down quarks).
The NP contributions to the Wilson coefficients C ( ),τ τ 9,10 qualitatively follows the patterns in (51) but with reversed signs. BR(B s → Kττ ) and BR(B s → φττ ) are significantly increased compared to the SM, cf. Fig. 1(b). By contrast R νν K ( * ) is suppressed compared to the SM. The suppression of R νν K ( * ) together with a Z explanation of b → sµ + µ − is not in contradiction with the results of [49]. In our model, we find that the enhancement of R νν K ( * ) from the second family [49] is counter-acted by a vast suppression of the third family contributions.
We do not find any observable that causes problems. We stress again that the Z′ does, by construction, only very feebly couple to the SM first generation. The gauge coupling g is only very mildly constrained by the fit, with a χ² < 3 region of g ∈ [−5.33 (−5.08), −0.32 (−0.39)] for point I. (II.). Bounds for family-specific Z′ bosons can be obtained from the LHC √s = 13 TeV searches [76,77] and range from M_{Z′} ≳ 1.3 TeV to 2.0 TeV [78,79], while some models can survive with M_{Z′} as low as 500 GeV [80]. Deriving a robust limit for our model requires specifying the details of the first-family couplings, which is beyond the scope of this work. Nevertheless, our Z′ mass comes out right in the ballpark of current limits, and so it can be searched for at the LHC in dimuon final states. If not with the full run 2 data, our best fit parameter regions will conclusively be tested by a high-luminosity run of the LHC.
We have scattered one thousand points randomly within the χ 2 < 1(3) regions around the best fit values. There are strong correlations among the masses of the pair of VL leptons, and between the VL lepton masses and the prediction for BR(τ → µγ) Fig. 1(c). Also BR(B s → Kττ ) and BR(B s → φττ ) are strongly correlated Fig. 1(b). All other predicted observables are largely uncorrelated, see e.g. Fig. 1(d).
Given the parameters of our best fit points we can also evaluate the effective Higgs couplings of the SM quarks and leptons (see App. B for details). The resulting Higgs couplings are diagonal in the mass basis and, moreover, proportional to the masses, just like in the Standard Model.
For model building it is interesting to compare the (gauge-basis) Higgs Yukawa couplings to the mixing-induced contributions to the masses of the SM fermion generations. The muon and the strange quark obtain their mass predominantly due to mixing effects, and this is also reflected by their large off-diagonal couplings to the VL states. The tau lepton and the b quark, by contrast, obtain almost all of their mass from the direct Higgs Yukawa coupling, and mixing effects are minuscule.
CONCLUSION
We have studied the Standard Model extended by one complete family of "VL" fermions, including right-handed neutrinos, which are vector-like with respect to the Standard Model gauge group. In addition, we have introduced a new "U(1)_{3−4}" gauge symmetry under which the third SM family of quarks and leptons has U(1)_{3−4} charge +1, while the left-chiral part of the new 4th family has charge −1. Hence, the model is free of gauge anomalies. In our analysis, we have only considered the mixing of the VL states with the second and third SM families, while the absence of mixing with the first family is motivated by a high-scale flavor symmetry. The first family could always be included in a straightforward way without affecting our conclusions, and this should be done in the future in order to study the collider phenomenology of this model in more detail.
We have shown that this model can fit the anomalies in the muon g − 2 and in b → s µ⁺µ⁻ transitions without conflicting with other experimental data. The best fit points predict (cf. Tab. 3) new quarks, leptons, and a family-specific Z′ at the TeV scale, as well as testable effects in the lepton flavor violating processes τ → µγ and τ → 3µ. Furthermore, we find a significant enhancement of BR(B_s → K^(*)ττ) and BR(B_s → φττ), while R^{νν}_{K^(*)} is suppressed relative to the SM. Effects on other observables such as Z → µτ, h → µτ, h → γγ, h → gg, neutrino trident production, or PMNS unitarity violation are all robustly suppressed. The Higgs couplings in our model are to a very high degree SM-like, and we have given an analytic proof of that.
A Neutrino See-Saw
The neutrino mass matrix is described in (21). In the limit M ∼ M_{L,R} ≫ v_{Φ,ϕ} ≫ v, the eigenvectors of M_ν are known to a very good approximation in closed form, and it is then straightforward to construct the diagonalization matrix U_ν from these eigenvectors.
B Higgs Coupling Diagonalization
We wish to show that the Higgs couplings (40) are diagonal in the mass basis. We will focus on the charged leptons here, but the whole analysis of this section fully applies also to the d-type quarks by formally replacing λ_E → λ_D, λ_L → λ_Q, y_τ → y_b, and y_µ → y_s. Phenomenologically we are led to the relations v ≪ v_Φ ∼ v_ϕ and λ_{L,R}, y_τ, y_µ ≪ 1. Therefore, we will treat the lower 3 × 3 block of M in (13) as a perturbation. The leading-order mass matrix is then characterized by ξ := v/v_Φ. The left- and right-singular vectors u_{L(R),i} of this matrix come with normalization factors N_{L(R),i} defined by the requirement |u_{L(R),i}|² = 1, and the zeroth-order matrices U^(0)_{L,R} have these vectors as columns. One can obtain analytic expressions for the four corresponding singular values, including m_{E,L}, but we do not need them here and so they are not displayed. Numerically, U^(0)_{L,R} ≈ U_{L,R}, m^(0)_E ≈ m_E, and m^(0)_L ≈ m_L hold at the percent level. Now regard the lower 3 × 3 block as the perturbation. To leading order in perturbation theory, the eigenvalues of the perturbed matrix are obtained by transforming it with the zeroth-order left- and right-singular matrices. Following this procedure we find the small singular values of M; these values agree with the numerical values to O(1%). The off-diagonal residuals are numerically subdominant compared to the diagonal entries. The crucial point is that the Higgs couplings in the gauge basis (41) are, at least with regard to the lower 2 × 2 block, of exactly the same form as the perturbations to the mass matrix. Approximately diagonalizing the mass matrices with U^(0)_{L,R} hence results in Higgs couplings which are directly proportional to the mass matrices. There are slight deviations from proportionality arising at higher order, but they are numerically not relevant.
Evaluating the effective Higgs couplings in the mass basis numerically for our best-fit point (I.) (cf. Tab. 2), one finds that the off-diagonal corrections are clearly negligible. Thus, the Higgs coupling to quarks and leptons is diagonal in the mass basis and, moreover, proportional to the masses, just like in the Standard Model.
Integrating out the VL Fermions
The fact that the Higgs couplings are very much SM-like also holds for the effective mass matrix of the light (SM) fermions, which is obtained after integrating out the heavy VL fermions, and we wish to show this analytically. The 4 × 4 mass matrix (13) can be rotated to a basis in which there is a heavy 2 × 2 block. Upon integrating out the heavy states perturbatively, we obtain an effective 2 × 2 mass matrix for the light states. In particular, one finds that the effective light masses are to very high accuracy linear in the Higgs VEV. There are higher-order corrections in v, but they are numerically irrelevant. Thus, when one diagonalizes the fermion mass matrix one simultaneously diagonalizes the coupling of the Higgs field to the fermions. Hence there are no significant flavor-violating Higgs couplings.
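The statement can be phrased with the standard leading-order formula for integrating out a heavy block (schematic 2 × 2-block notation; A, B, C, D are placeholders for the blocks of the rotated mass matrix, not the paper's explicit expressions):

```latex
% Standard block-wise integrating-out: for a heavy block D, the light states
% obtain the effective mass matrix A - B D^{-1} C at leading order.
\begin{equation}
M = \begin{pmatrix} A & B \\ C & D \end{pmatrix}
\quad\Longrightarrow\quad
M^{\rm light}_{\rm eff} \;\simeq\; A \;-\; B\, D^{-1} C \;+\; \dots
\end{equation}
```

If the Higgs VEV enters the off-diagonal blocks only linearly while the heavy block D is dominated by the U(1)_{3−4}-breaking scale, the resulting effective light-mass matrix is linear in v, which is exactly the property used above.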
C Details of δa^{Z′}_µ
The leading-order contribution to the muon g − 2 arising from the Z′ couplings to leptons as in (31) follows the general results of [29,51]; the index a runs over all leptons with mass m_a in the loop.
D Details of τ → µγ
The leading-order contribution to τ → µγ arises from the flavor off-diagonal Z′ couplings between τ–VL and µ–VL states. We have used the general results given in [57] to find the leading-order contributions to the partial width, Γ(τ → µγ) = α g⁴/(1024 π⁴) × Σ_a (…), where a runs over all leptons with mass m_a in the loop, x_a := (m_a/M_{Z′})², and the loop functions are the same as in (85) | 6,477.8 | 2017-12-26T00:00:00.000 | [
"Physics"
] |
The Type VI Secretion System: A Dynamic System for Bacterial Communication?
Numerous studies in Gram-negative bacteria have focused on the Type VI Secretion Systems (T6SSs), Quorum Sensing (QS), and social behavior, such as in biofilms. These interconnected mechanisms are important for bacterial survival; T6SSs allow bacteria to battle other cells, QS is devoted to the perception of bacterial cell density, and biofilm formation is essentially controlled by QS. Here, we review data concerning T6SS dynamics and T6SS–QS cross-talk that suggest the existence of inter-bacterial communication via T6SSs.
INTRODUCTION
Bacteria are perpetually at war against multiple competitors and thus require weapons to conquer new territory or persist in an ecological niche. Among the mechanisms that aid in the struggle against other bacterial species are the Type VI Secretion Systems (T6SSs) (Hood et al., 2010). The T6SSs of Gram-negative bacteria are effector translocation apparatuses, resembling an inverted bacteriophage-puncturing device, composed of at least 13 proteins called core components (TssA-M, for Type six secretion) (Boyer et al., 2009; Silverman et al., 2012). Auxiliary components can be associated with these conserved proteins (Tag, for Type six secretion associated genes) (Shalom et al., 2007). T6SSs participate in a broad variety of functions, including virulence and antibacterial activity (Pukatzki et al., 2006, 2007; Ma et al., 2009; Coulthurst, 2013; Sana et al., 2016). T6SSs also participate in metal ion uptake, such as that of iron, manganese, and zinc (Chen et al., 2016; Lin et al., 2017; Si et al., 2017), conferring an advantage during bacteria-bacteria competition. In this review, we provide an overview of the data on T6SS assembly and emphasize connections between T6SSs and bacterial communication.
T6SS DYNAMICS
The Global Scenario
Type VI Secretion System contractile nanomachines allow bacteria to inject toxins directly into prey cell membranes or cytoplasm. The machinery of the T6SS is assembled in an orderly manner. It starts with membrane complex formation, allowing baseplate positioning. The baseplate serves as a platform for contractile tail elongation. Contraction of the sheath propels effectors across membranes. Finally, the ATPase TssH (ClpV) recycles the sheath and probably other T6SS components such as TssA, whereas the membrane-anchoring complex can be used to fire a new salvo (Figure 1).
FIGURE 1 | Model of T6SS assembly and Hcp/VgrG/effector translocation and recycling. The membrane complex is anchored to the cell envelope with the help of a lytic transglycosylase (1a, 1b, and 2). While effectors are loaded onto their respective Hcp, VgrG, or PAAR proteins, the baseplate platform is positioned (3), aiding inner tube and sheath assembly (4 and 5). TssA interacts with the membrane complex, allowing contractile tail polymerization (3). Hcp or sheath components can be individually added during elongation (4 and 5). Following sheath contraction, Hcp and VgrG/PAAR, with their associated effectors, are pushed out and delivered into prey cells (6). ClpV disassembles the sheath and probably other T6SS components (7a), and the membrane complex may be used for a new assembly cycle using recycled components (7b). (A) Indicates exogenous Hcp and VgrG that can be recycled after isogenic T6SS aggression. (B) Indicates secretion of Hcp and VgrG into the culture medium. The question mark indicates a hypothesis.
Membrane Complex Assembly
Type VI Secretion Systems are anchored to the cell envelope by a membrane core complex (Durand et al., 2015), which serves as a T6SS docking station and platform for baseplate assembly and prevents cell membrane damage during effector injection. The membrane core complex is a 1.7 MDa structure with a fivefold symmetry composed of 10 heterotrimeric complexes containing the three proteins TssJ, TssL, and TssM. Hierarchical biogenesis of this complex is initiated by the insertion of the lipoprotein TssJ in the outer membrane (Aschtgen et al., 2008; Zoued et al., 2014). TssJ then interacts with the large periplasmic domain of the inner membrane protein TssM (Felisberto-Rodrigues et al., 2011; Nguyen et al., 2015). The cytoplasmic domain of TssM interacts with the inner membrane protein TssL and with the cytoplasmic domain of another TssM subunit, thus enabling oligomerization (Logger et al., 2016). Similarly, the cytoplasmic domain of TssL mediates self-polymerization (Durand et al., 2012; Zoued et al., 2016a). The TssM periplasmic domain recruits a lytic transglycosylase (LTG), which is required for local peptidoglycan layer degradation, necessary for proper assembly of the 1.7 MDa TssJLM complex (Figure 1, assembly steps 1 and 2) (Weber et al., 2016; Santin and Cascales, 2017). Accessory proteins with a peptidoglycan-binding domain, such as TagL, which can bind to truncated TssL, can associate with the membrane complex. TagL corresponds in this case to an "ancestral TssL" domain (Aschtgen et al., 2010a,b).
Baseplate Complex Positioning
The T6SS baseplate complex, composed of TssA, TssE, TssF, TssG, TssK, and Valine-glycine repeat protein G (VgrG or TssI) proteins (Brunet et al., 2015), is recruited by the membrane complex (Zoued et al., 2013). This structure serves as a platform for contractile sheath assembly and is essential for the correct assembly of the inner tube, comprised of hexameric rings of Hemolysin-coregulated protein (Hcp or TssD) (Brunet et al., 2015). TssA forms a dodecamer complex, which first binds to the membrane complex (Zoued et al., 2016b) (Figure 1, assembly step 3). Positioning of the baseplate complex may be initiated by recruitment of TssE, TssK, and VgrG via TssA (Planamente et al., 2016; Zoued et al., 2016b). The cytoplasmic domains of TssM and TssL, located at the base of the membrane complex, interact with the TssG/TssK and TssK/TssE baseplate subunits, respectively (Zoued et al., 2013, 2016a; Logger et al., 2016). The baseplate complex thus forms an interface between the membrane complex and the T6SS tail: both the Hcp and TssC sheath subunits interact with baseplate components (Brunet et al., 2015) (Figure 1, assembly step 4). After TssE recruitment, TssA likely properly attaches the sheath onto the baseplate and/or stabilizes the sheath structure (Planamente et al., 2016).
Elongation of the Contractile Tail
The TssE baseplate component may initiate sheath assembly and anchors the sheath to the baseplate (Kudryashev et al., 2015). Hcp proteins assemble into hexameric rings, stacked head-totail, under the control of baseplate components (Brunet et al., , 2015. The TssBC sheath component then wraps around the inner Hcp tube (Zoued et al., 2016b). Formation of the inner tube precedes TssBC assembly and is primordial for proper stacking of the subunits Kapitein et al., 2013;Brunet et al., 2014). The Hcp tube has an inner diameter of ∼40 Å (Mougous et al., 2006), forming a lumen with a neutral surface, suggesting passive effector translocation into the Hcp tube . The diameter of the internal sheath, measuring approximately 100-110 Å (Bönemann et al., 2009), coincides with the ∼80-85 Å outer diameter of the Hcp hexamer (Cascales and Cambillau, 2012). The TssA dodecamer is located at the distal end of the tail in Escherichia coli (Zoued et al., 2016b), whereas TssA1 is a component of the baseplate/tail subcomplex in Pseudomonas aeruginosa (Planamente et al., 2016). The TssA complex appears to possess short, flexible arm-like extensions, which may grasp TssBC or Hcp and incorporate them, one by one, at the distal end of the contractile tail (Zoued et al., 2016b). Moreover, the diameter of the central channel of the ring-shaped TssA structure measures approximately 100 Å (Planamente et al., 2016), comparable to the dimension of the Hcp hexamers. Hcp components perhaps pass through the large central lumen of the TssA dodecamer (Figure 1, assembly step 4) and are added to the growing structure. Contrary to bacteriophages, the length of the T6SS tail does not appear to be controlled by a specialized protein (Zoued et al., 2014;Vettiger and Basler, 2016). The length of the T6SS tail can exceed 1 µm . It is possible that contact with the opposite cell membrane is the physical signal to stop tail elongation (Figure 1, assembly step 5). Clemens et al. (2015) demonstrated that the sheath of Francisella tularensis has a quaternary structure with handedness opposite to that of the contracted sheath of T4 phage tails. The sheath contracts within a few milliseconds , propelling the Hcp-VgrG spike and effectors, punching either indiscriminately or in a focused manner into neighboring bacteria. The sheath contracts and becomes shorter and wider than in the extended state . Once the sheath is contracted, the ClpV recognition motif of TssC, which is buried in the elongated state, becomes accessible (Bönemann et al., 2009;Pietrosiuk et al., 2011;Kapitein et al., 2013;Kube et al., 2014;Douzi et al., 2016), permitting TssBC recycling by the ATPase. Thus, TssB and TssC can be reused for a new round of sheath elongation (Figure 1, assembly steps 6 and 7). An alternative mode of sheath disassembly may involve the TagJ accessory protein (Forster et al., 2014). TagJ is structurally related to particular TssA C-terminal extensions (Planamente et al., 2016). In this case, TagJ interacts with TssB and recruits ClpV to the sheath (Forster et al., 2014). ClpV can also interact with TssA and may be involved in recycling TssA rings (Planamente et al., 2016).
Effector Translocation
The puncturing device, consisting of the VgrG trimer, is located at the top of the inner tube and may be crucial for piercing the prey cell envelope. The VgrG trimer sometimes terminates with a Pro-Ala-Ala-Arg (PAAR)-repeat-containing protein, sharpening the tip (Shneider et al., 2013;Bondage et al., 2016).
Effectors transported by T6SS fall into two groups: "specialized" effectors and "cargo" effectors (Cianfanelli et al., 2016). Specialized effectors are extension domains of a structural component, whereas cargo effectors interact directly with VgrG, PAAR, or Hcp proteins , with or without the help of accessory proteins (Alcoforado Diniz and Coulthurst, 2015;Liang et al., 2015;Unterweger et al., 2015). Four main classes of antibacterial effectors have been described, according to the target (Figure 1). Peptidoglycan targeting effectors are comprised of both Type six amidase effectors (Tae) and Type six glycoside hydrolase effectors (Tge) (Whitney et al., 2013). Type six lipase effectors (Tle) hydrolyse membrane phospholipids Flaugnatti et al., 2016), whereas Type six DNase effectors (Tde) have nuclease activity (Ma et al., 2014). Some toxins do not belong to any of these four main classes. Pore-forming toxins, such as VasX from Vibrio cholerae, disrupt the inner membrane integrity of target cells (Miyata et al., 2013). Whitney and collaborators identified a NAD(P) + glycohydrolase toxin in P. aeruginosa (Whitney et al., 2015). This toxin depletes cellular NAD(P) + levels and induces bacteriostasis. The T6SS is not only an injection mechanism, it also enables the release of a proteinaceous metallophore into the extracellular medium and plays a role in the transport of Mn 2+ under conditions of oxidative stress (Si et al., 2017) and iron uptake Lin et al., 2017).
Bacteria that secrete antibacterial toxins also produce immunity proteins, which interact with toxic effectors, to allow self-protection and prevent the killing of sibling cells (called Tai, Tgi, Tli, and Tdi, corresponding to their effector family). Immunity proteins and effector targets are located within the same cellular compartment . Therefore, tli genes encoding outer membrane lipoproteins or periplasmic exposed lipoproteins, the Tle, should act in the periplasm (Figure 1). Some other proteins secreted by the T6SS are involved in selfrecognition, allowing communication between bacteria (Wenren et al., 2013;Cardarelli et al., 2015;Saak and Gibbs, 2016). In bacteria, secreted proteins are involved in many functions and are essential for bacterial fitness (Maffei et al., 2017). In some strains, the T6SS is activated in response of T6SS aggression by neighboring bacteria during cell-cell contact. Thus, the T6SS can modulate the fitness of other bacteria. In addition, T6SSs can be active, even in pure culture, and the presence of Hcp and VgrG in the culture medium is often used as evidence of a functional T6SS (Pukatzki et al., 2006. What purpose, however, does a functional T6SS serve in the absence of a competitor or prey?
IS THE T6SS INVOLVED IN CELL-TO-CELL SIGNALING AND COMMUNICATION?
Prelude
Pseudomonas aeruginosa is a widely-used model for T6SS studies. P. aeruginosa possesses three T6SS clusters comprised of TssA-M core component proteins. They are called H1-T6SS, H2-T6SS, and H3-T6SS. Among them, the H1-T6SS machinery is the most widely studied and is involved in antibacterial activity (Hood et al., 2010). The H2-T6SS and H3-T6SS are involved in virulence in eukaryotes (Lesic et al., 2009;Sana et al., 2012Sana et al., , 2015 but also display antibacterial activity by secreting trans-kingdom effectors, such as PldA and TplE via the H2-T6SS or PldB via the H3-T6SS machinery Jiang et al., 2014Jiang et al., , 2016. "Dueling" and "Tit-for-Tat" Two types of T6SS behavior can be distinguished: that of defensive (targeted firing) and offensive cells (arbitrarily firing). P. aeruginosa can discern T6SS-mediated aggression of neighboring sister cells . Similarly, P. aeruginosa can perceive T6SS attacks coming from V. cholerae or Acinetobacter baylyi cells . In both cases, P. aeruginosa is first assaulted by a nearby bacterium and then attacks, in turn, the aggressive cell. This mechanism is called "T6SS dueling" and is mediated by the H1-T6SS. Thus, P. aeruginosa only counterattacks in response to T6SS firing of V. cholerae or A. baylyi. In general, P. aeruginosa does not target T6SS-defective bacteria, suggesting that it is a defensive bacterium, in contrast to offensive V. cholerae and A. baylyi strains. However, P. aeruginosa strains affected in the hybrid sensor kinase RetS, can attack T6SSdefective bacteria in a H1-T6SS-dependent manner (Hachani et al., 2013. The perception of a T6SS attack involves the TagQRST threonine phosphorylation pathway, following envelope perturbation after T6SS-mediated perforation Casabona et al., 2013;Ho et al., 2013) (Figure 2). Indeed, Polymyxin B, which alters cell membranes of Gramnegative bacteria, mediates activation of the T6SS, confirming that envelope perturbation triggers the T6SS counterattack . The TagQRST trans-membrane signaling cascade, essential for activation of the H1-T6SS of P. aeruginosa, is composed of four proteins. The ABC transporter complex TagST, anchored to the inner membrane, has ATPase activity (Casabona et al., 2013). In this complex, TagT is required for T6SS activation after cell membrane perturbation . TagQ, an outer membrane lipoprotein, is necessary for outer membrane localisation of TagR, which is required for protein kinase PpkA phosphorylation (Hsu et al., 2009). PpkA phosphorylates, in turn, Fha1 (which has a forkhead-associated domain), thus activating H1-T6SS assembly (Figure 2). The phosphatase PppA counteracts the role of PpkA by dephosphorylating Fha1 (Mougous et al., 2007). T6SS dueling appears to be an indirect means of communication, in which the occurrence of T6SS attacks may correlate with cell density. As the population increases, the likelihood of targeting sister cells also intensifies. In other words, as the population grows, the incidence of T6SS aggression rises. Thus, the perception of T6SS attacks provides an overall view of bacterial density and a form of social interaction (Figure 2). The GacS/GacA System and the Interplay between the T6SS and Quorum Sensing in P. aeruginosa Two-Component Signal Transduction systems (TCSTs) are involved in external signal perception via a "sensor" and translate the signal via a "response regulator." Thus, TCSTs play a key role in adaptive responses during environmental stress. GacS/GacA is a TCST in P. 
aeruginosa that perceives unknown signals and regulates numerous networks (Reimmann et al., 1997). The GacS/GacA cascade activates the transcription of the small RNAs (sRNAs) RsmZ and RsmY. RsmZ has a high affinity for the RNA-binding protein RsmA and can sequester it, permitting the translation of genes encoding H1-T6SS mRNAs. A rsmZ mutation results in downregulation of the transcription of genes encoding the H1-T6SS and H3-T6SS of P. aeruginosa (Brencic et al., 2009;Moscoso et al., 2011). The GacS/GacA system is under the control of two hybrid sensors, RetS and LadS. The hybrid sensor kinase RetS decreases RsmZ/RsmY transcription via the inhibition of GacS/GacA phospho-relay (Goodman et al., 2004). In contrast, LadS, enhances GacA phosphorylation via GacS . In summary, H1-T6SS is upregulated by the GacS/GacA/RsmZ regulatory pathway, which depends on the balance between RetS and LadS activation from external signals, unlike the H2-T6SS and H3-T6SS (Figure 2).
Quorum Sensing (QS) is a system that allows social synchronization, based on the perception of population density, according to signal molecule concentration. QS is crucial for collective adaptive responses (similar to a social behavior) and regulates both bacterial virulence and biofilm formation (Deng et al., 2011). P. aeruginosa has four QS networks consisting of three classes of diffusible auto-inducers (Lee and Zhang, 2015). The first class includes two types of Homoserine Lactones (HSLs): N-(3-oxododecanoyl)-Homoserine Lactones (odDHL, 3-oxo-C12-HSL) and N-butyrylhomoserine Lactones (BHL, C4-HSL) controlled by the Las and Rhl cascades, respectively (Schuster and Greenberg, 2007). P. aeruginosa also produces Pseudomonas Quinolone Signal (PQS), 2-heptyl-3-hydroxy-4quinolone, of which the production is regulated by the PqsR cascade (also known as MvfR) (Cao et al., 2001). The last QS system consists of 2-(2-hydroxyphenyl)-thiazole-4-carbaldehyde, which is involved in the Integrated Quorum Sensing system (IQS) (Lee et al., 2013). The IQS can enhance PQS production depending on the P. aeruginosa strain (Lee et al., 2013;Sun et al., 2016). These QS networks are all interconnected and positively regulated by the Las cascade (Pessi et al., 2001;Lee et al., 2013). At the same time, RsmA is a negative post-transcriptional regulator of both 3-oxo-C12-HSL and C4-HSL production, affecting the Las/Rhl quorum sensing cascades (Pessi et al., 2001). Similarly, PqsR is post-transcriptionally repressed by RsmA (Kulkarni et al., 2014). However, the PQS system and Rhl cascade are upregulated via the Las pathway and PQS positively regulates the Rhl cascade (Rasamiravaka et al., 2015). In summary, the GacS/GacA system is a global activator of QS communication, because the action of the RNA-binding protein RsmA is jointly antagonized by RsmY and RsmZ (Kay et al., 2006) (Figure 2). At the same time, QS positively regulates H2-T6SS and H3-T6SS, whereas it suppresses H1-T6SS associated gene expression (Lesic et al., 2009).
A study published by Lin et al. (2017) revealed a link between the H3-T6SS and cell-cell signaling in P. aeruginosa. The cellcell signaling compound PQS contributes to the formation of Outer Membrane Vesicles (OMVs) and associates with vesicle membranes. The PQS in OMVs can capture iron from the extracellular medium. In parallel, the protein TseF, secreted by the H3-T6SS, associates with OMVs containing PQS. The TseF from the OMVs then interacts with the Fe(III)-pyochelin receptor FptA and the porin OprF. This enables the delivery of PQS, associated with iron, into bacterial cells. Thus, the effector TseF delivered by the H3-T6SS is involved in the PQS pathway (Figure 2).
T6SS and Social Behavior in Proteus mirabilis
Proteus mirabilis strains are able to recognize isogenic cells, coordinate multicellular swarming motility, and form macroscopic boundaries with non-sister cell swarms (Alteri et al., 2013). Macroscopic demarcation, called Dienes lines, can be observed between swarming P. mirabilis strains. Functional T6SSs are involved in this recognition phenomenon in the region of inter-strain contact. This visible boundary requires physical cell-cell interactions and is the result of T6SS recognition. Indeed, cells at the intersection between two swarming populations of P. mirabilis appear to kill each other using their T6SS effectors (Alteri et al., 2013). The T6SSs appear to assemble and fire deeply beyond the inter-strain boundary into the opposing swarming cells, thus enhancing T6SS effector injection (Alteri et al., 2013). Some Identification of self (Ids) proteins, involved in self-recognition and territorial behavior, are exported by the T6SS (Wenren et al., 2013). For example, IdsD mediates identity recognition between neighboring cells in a T6SS-dependant manner. IdsD interacts specifically with the cognate IdsE protein on the surface of recipient cells. The specific interaction between the two membrane-bound self-recognition proteins IdsD and IdsE drives social behavior (Cardarelli et al., 2015). These binding interactions contribute to the definition of strain identity and discrimination between self and neighboring non-self cells. The lack of binding between IdsD and IdsE correlates with the formation of the visible boundary. The authors speculate that IdsE itself contributes to the repression of swarm colony expansion. Interaction between the two cognate proteins reduces swarm restriction (Saak and Gibbs, 2016). IdsD and IdsE proteins may constitute a lethal effector-immunity (toxin-antitoxin) system. Contrary to QS, which is based on contact-independent recognition, T6SS-associated recognition generally requires cell-to-cell contact (Saak and Gibbs, 2016). P. mirabilis uses the T6SS to discriminate between strains, coordinate multicellular swarming behavior, and direct its collective movement. Thus, the T6SS is essential for boundary formation and mediates cell-cell communication of swarming P. mirabilis via specific self-identity determinants.
The T6SS of the P. fluorescens MFE01 Strain The P. fluorescens MFE01 strain, like numerous other P. fluorescens strains, does not produce the QS signals of P. aeruginosa (no HSL or PQS) (Gallique et al., 2017). MFE01 is an aggressive T6SS strain which contains a unique T6SS core component cluster and three orphan hcp genes (Decoin et al., 2014(Decoin et al., , 2015. The MFE01 T6SS is involved in biofilm formation and maturation (Gallique et al., 2017), as shown for other T6SSs (de Pace et al., 2011;Sheng et al., 2013;Lin et al., 2015;Tian et al., 2015). Indeed, P. fluorescens MFE01 is unable to form biofilms once the T6SS machinery is inactive (in a tssC mutant), whereas individual mutations of the three hcp genes affect biofilm maturation, but not formation. Intra-bacterial cooperation in conditions of biofilm formation via T6SS dueling could occur. Indeed, alterations of membrane phospholipid composition increase the ppGpp stress-response signal, which in turn causes the premature production of HSL-QS signals, including in P. aeruginosa (Baysse et al., 2005) (Figure 2). Similarly, communicating pathways could be activated following membrane perturbation due to T6SS perforation in MFE01 strain during "tit-for-tat" interactions.
CONCLUSION
A recent study showed that bacteria can reuse T6SS components from attacking cells for new T6SS assembly (Vettiger and Basler, 2016) (Figure 1). This suggests that an increase in cell density increases the concentration of T6SS components in bacteria and the ability of the cell to fire again, forming a positive feedback loop. We postulate that the T6SS could be a cell-to-cell signal between sibling cells, depending on cell density, similar to the QS pathway, especially in bacteria devoid of QS signals.
AUTHOR CONTRIBUTIONS
Writing, review, and editing: MG, AM, MB.
FUNDING
This study was supported by GRR CBS funds from the Region Haute-Normandie, SFR Normandie Végétale (NORVEGE) funds, GEA (grand Evreux agglomeration) funds, and FEDER funds. | 5,110 | 2017-07-28T00:00:00.000 | [
"Biology"
] |
xCOLD GASS and xGASS: Radial metallicity gradients and global properties on the star-forming main sequence
Context. The xGASS and xCOLD GASS surveys have measured the atomic (H i) and molecular gas (H2) content of a large and representative sample of nearby galaxies (redshift range of 0.01 < z < 0.05). Aims. We present optical longslit spectra for a subset of the xGASS and xCOLD GASS galaxies to investigate the correlation between radial metallicity profiles and cold gas content. In addition to previous data, this paper presents new optical spectra for 27 galaxies in the stellar mass range of 9.0 ≤ log M⋆ [M⊙] ≤ 10.0. Methods. The longslit spectra were taken along the major axis of the galaxies, allowing us to obtain radial profiles of the gas-phase oxygen abundance (12 + log(O/H)). The slope of a linear fit to these radial profiles is defined as the metallicity gradient. We investigated correlations between these gradients and global galaxy properties, such as star formation activity and gas content. In addition, we examined the correlation of local metallicity measurements and the global H i mass fraction. Results. We obtained two main results: (i) the local metallicity is correlated with the global H i mass fraction, which is in good agreement with previous results. A simple toy model suggests that this correlation points towards a 'local gas regulator model'; (ii) the primary driver of metallicity gradients appears to be stellar mass surface density (as a proxy for morphology). Conclusions. This work comprises one of the few systematic observational studies of the influence of the cold gas on the chemical evolution of star-forming galaxies, as considered via metallicity gradients and local measurements of the gas-phase oxygen abundance. Our results suggest that local density and local H i mass fraction are drivers of chemical evolution and the gas-phase metallicity.
Introduction
Theory and observations support a scenario where galaxy growth is tightly linked to the availability of cold gas. Several key galaxy scaling relations can be explained by an 'equilibrium' (or 'gas regulator') model, in which galaxy growth self-regulates through accretion of gas from the cosmic web, star formation, and the ejection of gas triggered by star formation and feedback from active galactic nuclei (AGN, see e.g. Lilly et al. 2013; Davé et al. 2013). Galaxy-integrated scaling relations successfully reproduced by this model range from the mass-metallicity relation (Zahid et al. 2014; Brown et al. 2018) and the baryonic mass fraction of halos (Bouché et al. 2010) to the redshift evolution of the gas contents of galaxies (Saintonge et al. 2013).
Galaxy measurements and calibrated 2D spectra of low-mass galaxies are only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/cat/J/A+A/649/A39. Based on observations made with the EFOSC2 instrument on the ESO NTT telescope under the programme 091.B-0593(B) at Cerro La Silla (Chile).
The next logical step is to explore how this gas-centric galaxy evolution model performs in explaining the resolved properties of galaxies. This is a particularly timely question as integral field spectroscopic (IFS) surveys continue to provide detailed maps of the stellar and chemical composition of large, homogeneous, representative galaxy samples. The Calar Alto Legacy Integral Field Area survey (CALIFA; Sánchez et al. 2012), the Sydney-AAO Multi-object Integral field galaxy survey (SAMI; Croom et al. 2012), and the Mapping Nearby Galaxies at Apache Point Observatory survey (MaNGA; Bundy et al. 2015), for instance, focus on samples of hundreds to thousands of galaxies in the nearby Universe. Similar surveys at z > 1 are also now possible with IFS instruments operating in the nearinfrared such as KMOS (e.g. the KMOS3D and KROSS surveys, Wisnioski et al. 2015 andStott et al. 2016, respectively).
Observations of colour and star formation rate (SFR) across the discs of nearby galaxies suggest a scenario in which galaxies form and quench from the inside out; overall, the outskirts of disc galaxies tend to be bluer (De Jong 1996;Wang et al. 2011;Pérez et al. 2013) and remain star-forming for longer (Belfiore et al. 2017a;Medling et al. 2018).
Alongside star formation (SF) profiles, a great deal of insight can be gained by focusing on spatial variations of the chemical composition of the gas. In general, gas at the outskirts of a galaxy tends to be more metal-poor than at its centre (e.g. Searle 1971; Shields 1974; Sánchez et al. 2014). Chemo-dynamical models of galaxies suggest different underlying physical processes to explain the formation of metallicity gradients. Early on, Matteucci & François (1989) found, via Galactic models, that the inflow of metal-poor gas is vital to the formation of metallicity gradients. The chemical evolution models by Boissier & Prantzos (1999) for the Milky Way and by Boissier & Prantzos (2000) for disc galaxies additionally emphasise the role of radial variations of the star formation rate and efficiency, as well as inside-out growth, for the formation of metallicity gradients. Furthermore, radial gas flows are found to be vital in reproducing the metallicity gradients of the Milky Way (Schoenrich & Binney 2009). Within the 'equilibrium' framework, such metallicity gradients would be explained by the accretion of metal-poor gas onto the outer regions of these galaxies. Pezzulli & Fraternali (2016) showed that metallicity gradients already form in closed-box models due to the fact that denser (i.e. more central) regions of galaxies evolve faster than less dense (outer) regions. However, to arrive at realistic metallicity gradients, their analytical model requires radial gas flows and, to a lesser extent, also inside-out growth.
Metallicity gradients are common and there are many explanations of their presence. In particular, several physical processes (inside-out growth, radial flows, gradients in the star formation efficiency) have been predicted to give rise to metallicity gradients and it is difficult to assess their relative importance. It is also unclear whether or not the direction and strength of metallicity gradients depend on global galaxy properties. For example, Sánchez-Menguiano et al. (2016) and Ho et al. (2015) find no relation between metallicity gradients and the stellar mass of the galaxies. They argue that metallicity at a certain radius is only determined by local conditions and the evolutionary state of the galaxy at that radius, rather than by global properties. There are however analyses finding correlations (both positive and negative) between stellar mass and metallicity gradients. Poetrodjojo et al. (2018), Belfiore et al. (2017b) and Pérez-Montero et al. (2016) find hints of flatter metallicity gradients in lower mass galaxies, while Moran et al. (2012;hereafter M12) find that galaxies at the lower mass end of their sample have steeper gradients. We note, however, that the low-mass end of M12 is at 10 10 M , where Belfiore et al. (2017b) find a turnover in the strength of the metallicity gradient, such that both more massive and less massive galaxies show flatter metallicity gradients than galaxies with masses around 10 10 M . In addition, the sample from Poetrodjojo et al. (2018) only includes galaxies with stellar masses below 10 10.5 M .
As it is based on an extensive longslit spectroscopy campaign rather than IFU maps, the M12 study may lack the full mapping of metallicity across the galaxy discs, but it does benefit from having access to direct measurements of the cold gas contents of the galaxies through the GALEX Arecibo SDSS Survey (GASS) and CO Legacy Database for GASS (COLD GASS) surveys (Catinella et al. 2010;Saintonge et al. 2011). Their finding is that metallicity gradients tend to be flat within the optical radius of the galaxies, but that the magnitude of any drop in metallicity in the outskirts is well-correlated with the total atomic hydrogen content of the galaxy. This provides support for a scenario where low-metallicity regions are connected to the infall of metal-poor gas, as also found by Carton et al. (2015). Indeed, the chemical evolution models of Ho et al. (2015), Kudritzki et al. (2015) and Ascasibar et al. (2015) (amongst others) are able to predict metallicity gradients from radial variations in the gas-to-stellarmass ratio. Using dust extinction maps derived from the Balmer decrement to infer local gas masses, Barrera-Ballesteros et al. (2018) find a relation between the radial profiles of gas to stellar mass ratio and metallicity that is in good agreement with the predictions from the local gas-regulator model (similar to the global model, but on local scales).
In this paper, we revisit the results of M12 but for an increased sample of galaxies, which crucially extends the stellar mass range by an order of magnitude. This is achieved by combining new optical longslit spectra for galaxies in the stellar mass range of 9 < log M⋆ [M⊙] < 10 with global cold gas measurements from the xGASS and xCOLD GASS surveys (Saintonge et al. 2017). The sample studied here, while lacking spatially resolved gas observations, is larger than those studied by Ho et al. (2015), Kudritzki et al. (2015) and Carton et al. (2015), and it benefits from direct, homogeneous CO and H i observations. This paper is organised as follows: In Sect. 2, we present the galaxy sample and auxiliary data from the Sloan Digital Sky Survey (York et al. 2000) and the xGASS/xCOLD GASS surveys. Details on observations, data reduction, and data analysis are provided in Sect. 3. Our results are presented in Sects. 4 and 5. We discuss these results and offer our conclusions in Sect. 6. Throughout the paper we assume a standard ΛCDM cosmology (H_0 = 70 km s⁻¹ Mpc⁻¹, Ω_M = 0.30 and Ω_Λ = 0.70) and a Chabrier initial mass function (Chabrier 2003).
Sample selection and global galaxy properties
The extended GASS (xGASS, Catinella et al. 2018) and the corresponding extended COLD GASS (xCOLD GASS, Saintonge et al. 2017) surveys are projects designed to provide a complete view of the cold atomic and molecular gas contents across the local galaxy population with stellar masses in excess of 10 9 M . The survey galaxies were randomly selected from the parent sample of objects in the SDSS DR7 spectroscopic catalogue (Abazajian et al. 2009), with 0.01 < z < 0.05 and log M [M ] > 9.0, and located within the footprint of the Arecibo HI ALFALFA survey (Giovanelli et al. 2005;Haynes et al. 2018). No additional selection criteria were applied, making the sample representative of the local galaxy population. As shown in Fig. 1, it samples the entire SFR-M plane. The xGASS survey provides total HI masses for 1200 galaxies and xCOLD GASS derived total molecular gas masses from CO(1-0) observations of a subset of 532 of these. A complete description of the sample selection, observing procedures and data products of xGASS and xCOLD GASS can be found in Catinella et al. (2018) and Saintonge et al. (2017), respectively.
In addition to the H i and CO measurements, optical longslit spectra were obtained for a subset of the xGASS/xCOLD GASS galaxies. For galaxies with stellar masses log M⋆ [M⊙] > 10, these data were obtained with the 6.5 m MMT telescope on Mount Hopkins, Arizona (182 galaxies) and the 3.5 m telescope at Apache Point Observatory (APO), New Mexico (51 galaxies, M12). For 27 galaxies in the stellar mass range of 9.0 < log M⋆ [M⊙] < 10.0, optical longslit spectra have been obtained with the EFOSC2 spectrograph at the ESO New Technology Telescope (NTT) in La Silla, Chile; these are new observations, presented for the first time. These 27 galaxies were randomly selected from the xGASS parent sample. The only selection criterion was observability with the NTT.
Fig. 1 (caption). Blue open circles represent the xGASS sample, light blue crosses the xCOLD GASS sample, and yellow open squares mark galaxies that were included in the M12 analysis but not in this work. Galaxies marked with a red symbol are included in the sample used in this paper, where diamonds represent galaxies with new observations and filled circles galaxies with observations from M12.
In this work we combine the new NTT observations with data from M12. As can be seen in Fig. 1, all low-stellar-mass galaxies with optical spectra from the new NTT observations (red diamonds) are star-forming galaxies, meaning that they are located on or nearby the star formation main sequence (SFMS).
To obtain a uniform sample, only star-forming galaxies from the M12 sample are included in this work. This is achieved by selecting only those galaxies that are within ±1.5σ of the SFMS, as defined by Catinella et al. (2018) and described in more detail by Janowiecki et al. (2020). This selection criterion is applied at all stellar masses and is a compromise between including as many low-mass galaxies as possible (20) and including only galaxies near a well-defined SFMS. Combining the high-stellar-mass star-forming sample (86 galaxies) with the 20 new low-mass galaxies results in a sample of 106 galaxies. We note that for three high-mass galaxies, optical longslit data are available from both the MMT and APO. For these galaxies, we chose to use only the MMT spectra, as more reliable metallicity measurements are available at similar or larger galactocentric radii from the MMT data than from the APO data. Figure 2 shows the distribution of stellar mass (left panel), stellar mass surface density (middle panel), and NUV−r colour (right panel). As can be seen here, this work expands the work by M12 to lower stellar masses and includes more galaxies with low µ⋆. As expected, due to the selection of SFMS galaxies, we include fewer high-µ⋆ galaxies than in the M12 analysis and no quiescent galaxies.
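As an illustration of this cut, here is a minimal sketch in Python; the main-sequence parametrisation, the scatter value, and all variable names are hypothetical placeholders rather than the actual Catinella et al. (2018) fit or xGASS catalogue fields.

```python
import numpy as np

def on_main_sequence(log_mstar, log_sfr, ms_fit, sigma=0.3, n_sigma=1.5):
    """Select galaxies within +/- n_sigma * sigma of a star-forming main sequence.

    ms_fit : callable returning the expected log SFR at a given log M*
             (placeholder for the actual SFMS parametrisation).
    sigma  : assumed scatter of the SFMS in dex (placeholder value).
    """
    offset = log_sfr - ms_fit(log_mstar)        # vertical offset from the SFMS
    return np.abs(offset) <= n_sigma * sigma    # boolean selection mask

# Example with a toy linear main sequence (purely illustrative):
toy_ms = lambda logm: 0.7 * (logm - 10.0) - 0.1
log_mstar = np.array([9.2, 10.1, 10.8])
log_sfr   = np.array([-0.8, 0.1, -1.5])
print(on_main_sequence(log_mstar, log_sfr, toy_ms))   # [ True  True False ]
```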
The overlap between the samples with H i, CO, and optical spectroscopic measurements is not perfect because of the timing of the various observing campaigns. Of the 106 galaxies in our sample of main-sequence galaxies with longslit optical spectra, 99 have H i measurements from Arecibo and 76 have CO observations from the IRAM-30 m telescope. When correlating measurements from optical longslit spectra with H i or CO observations, those galaxies lacking information are excluded. In Fig. 3, the H i and H 2 gas mass-to-stellar mass ratios are shown as a function of stellar mass. The galaxies selected for this study have the typical gas fractions of main-sequence galaxies.
Observations
The optical longslit spectra of the 27 low-mass galaxies were obtained with the EFOSC2 spectrograph at the ESO New Technology Telescope (NTT) in La Silla, Chile in September 2012 and April 2013. The slit size was 1.5 arcsec by 4 arcmin and aligned along the major axis of each galaxy. In order to measure all the strong emission lines required for metallicity measurements, ranging in wavelength from [OII]372.7 nm to Hα at 656.3 nm, two observations of every galaxy were needed, one for the bluer half of the spectrum (368.0 nm to 550.0 nm) and one for the redder half (535.0 nm to 720.0 nm). The overlap was used to check for consistency in flux calibration across the entire wavelength range. Individual science exposures were observed for 900 s. The total exposure time varied according to the surface brightness of the galaxy, but amounted on average to 3600 s per spectrum half. After including a binning factor of 2, the image size is 1024 pixels by 1024 pixels. The spectral resolution is 0.123 nm and 0.113 nm in the red and blue halves of the spectrum, respectively, which is approximately equivalent to a velocity resolution of 59 and 74 km s −1 .
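To make the quoted numbers easy to verify, the conversion from spectral to velocity resolution is simply ∆v = c ∆λ/λ. The short sketch below evaluates it at the centre of each spectral half; the central wavelengths used are our assumption about where the conversion was evaluated, not values stated in the text.

```python
# Minimal sketch: convert spectral resolution to velocity resolution, dv = c * dlambda / lambda.
# The central wavelengths of the two spectral halves are assumptions for illustration only.
C_KM_S = 299792.458  # speed of light in km/s

halves = {
    "blue (368-550 nm)": {"resolution_nm": 0.113, "central_wavelength_nm": 459.0},
    "red (535-720 nm)":  {"resolution_nm": 0.123, "central_wavelength_nm": 627.5},
}

for name, h in halves.items():
    dv = C_KM_S * h["resolution_nm"] / h["central_wavelength_nm"]
    print(f"{name}: ~{dv:.0f} km/s")  # roughly 74 and 59 km/s, as quoted in the text
```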
The raw spectra were first reduced with standard iraf procedures. Bias images, dome flats, and sky flats were taken at the beginning of each night. Bias images were subtracted from the science images, as was the dark current, which was estimated from the overscan regions of the science exposures. To obtain the overall flat field correction, both dome and sky flats were used: first the spatial flattening was calculated from the dome flats and the spectral flattening from the sky flats, and these two were then multiplied to obtain a master flat field, which was applied to all science images from a given night of observing. Since all exposures for one of the two spectral setups of each galaxy were obtained in one night, it was possible to stack individual frames at this point. During the stacking process cosmic rays were removed by an outlier rejection algorithm, and any remaining ones were then removed manually.
Wavelength and flux calibration, as well as the straightening of the image along the slit, were then performed on the stacked spectra. For the wavelength calibration, observations of a HeAr lamp were used in addition to the sky lines. The flux calibration was based on observations of multiple standard stars per night. The standard stars for the September 2012 run were EG 21, Feige 110, and LTT 7987, whereas during the April 2013 run the standard stars EG 274, Feige 56, LTT 3218, and LTT 6248 were observed.
Extraction of spatially resolved optical spectra
To extract spatially resolved, one-dimensional spectra from the reduced two-dimensional spectra, we used the same pipeline as M12. First, the two spectral halves were merged and possible flux mismatches were removed. Next, a rotation curve was fitted to absorption line measurements, so that all following steps could be conducted in the rest frame. After that, the spectrum was spatially binned, starting in the centre and moving outwards. The size of the spatial bins was chosen such that a minimum continuum signal-to-noise ratio (S/N) of 5 was reached. This binning procedure resulted in one-dimensional spectra, each covering a certain radial range at a certain radial position of the galaxy (see Fig. 4).
[Figure caption residue: if a galaxy has not been detected either in H i or CO, its upper limit is shown as an arrow in the respective panel.]
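The adaptive spatial binning can be sketched as follows. This is a minimal illustration of the idea (accumulate rows outwards from the centre until the stacked continuum S/N reaches the threshold), not the actual M12 pipeline; the function name and the uncorrelated-noise assumption are ours.

```python
import numpy as np

def radial_bins_to_min_snr(signal, noise, min_snr=5.0):
    """Group consecutive spatial rows (ordered from the galaxy centre outwards)
    into bins that each reach a minimum continuum S/N.

    signal, noise : per-row continuum flux and 1-sigma noise for one side of the slit.
    Assumes uncorrelated noise, so the stacked S/N is sum(signal)/sqrt(sum(noise**2)).
    Returns a list of (first_row, last_row) index pairs.
    """
    bins, start = [], 0
    for end in range(1, len(signal) + 1):
        s = np.sum(signal[start:end])
        n = np.sqrt(np.sum(noise[start:end] ** 2))
        if n > 0 and s / n >= min_snr:
            bins.append((start, end - 1))
            start = end
    return bins  # rows left over at the end never reached the S/N threshold
```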
The stellar continuum of each one-dimensional spectrum extracted in the previous step was then fitted with a superposition of simple stellar population models (Bruzual & Charlot 2003). The best-fitting continuum model was subtracted from the spectrum and the remaining emission lines were fitted with Gaussian functions (Tremonti et al. 2004). This process results in measurements of emission lines and stellar continuum at different galactocentric radii. An example of a typical spatially resolved spectrum with the fits to stellar continuum and emission lines is given in Fig. 4.
Measuring radial metallicity profiles and gradients
With emission lines measured at different galactocentric radii, we are able to measure the gas-phase metallicity for each radial bin. After correcting the emission line fluxes for extinction (M12), the gas-phase oxygen abundance was measured from ratios of the [OIII]λ500.7 nm, Hβ, [NII]λ658.4 nm, and Hα emission line fluxes following the prescription by Pettini & Pagel (2004). There are many metallicity calibrators with different zero points, which have been proposed and discussed in the literature (see e.g. Kewley & Ellison 2008). Given the emission lines available in our observations, and because metallicity calibrators based on the O3N2 ratio are considered robust and are widely used in the literature, we focus on these types of metallicity estimators. In addition to the Pettini & Pagel (2004) prescription, we also use the Marino et al. (2013) O3N2 metallicity calibrator to make direct comparisons with other works (Sect. 4.3). However, since the Marino et al. (2013) O3N2 calibrator tends to underestimate high metallicities (Erroz-Ferrer et al. 2019), we used the Pettini & Pagel (2004) calibrator for the majority of the analysis. The same data products, derived with the same pipeline, are available for the M12 galaxies. Thus, all the following analysis steps were performed for both the M12 and the new data.
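For reference, both calibrators used here are simple linear functions of the O3N2 index; a sketch of the conversion, assuming extinction-corrected line fluxes, is given below with the published PP04 and M13 coefficients.

```python
import numpy as np

def o3n2_metallicity(f_oiii_5007, f_hbeta, f_nii_6584, f_halpha, calib="PP04"):
    """Gas-phase 12+log(O/H) from the O3N2 index for extinction-corrected fluxes.
    O3N2 = log10( ([OIII]500.7/Hbeta) / ([NII]658.4/Halpha) ).
    PP04: Pettini & Pagel (2004);  M13: Marino et al. (2013).
    """
    o3n2 = np.log10((f_oiii_5007 / f_hbeta) / (f_nii_6584 / f_halpha))
    if calib == "PP04":
        return 8.73 - 0.32 * o3n2
    elif calib == "M13":
        return 8.533 - 0.214 * o3n2
    raise ValueError("calib must be 'PP04' or 'M13'")
```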
The analysis procedure described in the previous sections provided radial profiles of the gas-phase metallicity. The radial variation of these profiles was quantified by the slope of a linear fit to the metallicity as a function of radius. To account for the varying sizes of galaxies and their different distances, the galactocentric, light-weighted radius of each radial bin was either normalised or converted to kpc. For the normalisation, the SDSS 25 mag arcsec^-2 isophotal radius (R 25), the Petrosian 90 percent radius (r 90) (Petrosian 1976), and the effective radius (r eff) were used in the r band. Using u- or i-band radii, that is, focusing on the young or old stellar population, does not affect the results. We therefore focus on radii measured in the r band only. These radii have been published with SDSS DR7 (Abazajian et al. 2009) and are taken from the MPA-JHU catalogue. 12 + log(O/H) was then fitted as a linear function of r/r_norm,

12 + log(O/H) = a + b (r/r_norm),

using the scipy (Virtanen et al. 2020) function curve_fit, which utilises the least-squares-based Levenberg-Marquardt algorithm. The slope of this linear function, b = ∆(12 + log(O/H)), was then defined as the radial gradient of the metallicity. In order to improve the reliability of the results, some radial bins were excluded from the analysis. We rejected any bin where AGN emission contributed significantly to the ionisation. Those were identified by using strong emission line ratios to place the measurement in the [NII]/Hα versus [OIII]/Hβ Baldwin-Phillips-Terlevich diagnostic plot (BPT, Baldwin et al. 1981). Any radial bin with measured line ratios falling above the empirical threshold of Kauffmann et al. (2003) was excluded from the analysis. Furthermore, we required a signal-to-noise (S/N) detection of at least 3 for the four emission lines [O III]λ500.7 nm, [N II]λ658.3 nm, Hα, and Hβ.
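A minimal sketch of the gradient measurement with scipy's curve_fit is shown below; it assumes the S/N and BPT cuts have already been applied to the radial bins, and the function and variable names are placeholders rather than the exact pipeline code.

```python
import numpy as np
from scipy.optimize import curve_fit

def linear_profile(r_norm, intercept, gradient):
    # 12 + log(O/H) modelled as a linear function of the normalised radius
    return intercept + gradient * r_norm

def fit_metallicity_gradient(r_norm, oh12, oh12_err, min_points=3):
    """Fit a straight line to a radial metallicity profile.
    r_norm : galactocentric radius in units of the normalising radius (e.g. r/r_eff,r)
    oh12   : 12 + log(O/H) of the radial bins that survived the S/N and BPT cuts
    Returns (gradient, gradient_error) in dex per normalising radius, or None.
    """
    if len(r_norm) < min_points:
        return None
    popt, pcov = curve_fit(linear_profile, r_norm, oh12, sigma=oh12_err,
                           absolute_sigma=True)
    return popt[1], np.sqrt(pcov[1][1])
```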
In previous studies (e.g. Sánchez et al. 2014; Ho et al. 2015; Sánchez-Menguiano et al. 2016), metallicity gradients were often calculated after discarding measurements within a certain galactocentric radius to avoid contamination by any active nucleus. While we disregard radial bins with AGN-like emission in general, we calculated metallicity gradients twice to allow for fair comparisons with these results: once using all reliable data points and once only considering measurements coming from the region outside of 0.5 times the effective r-band radius r eff,r.
When requiring a minimum of three radial bins for the gradient measurement, we measured gradients from the entire radial profile for 88 galaxies, and gradients from the radial profile between 0.5 and 2 r eff,r for 74 galaxies. Of these, 75 and 66 galaxies, respectively, have stellar masses higher than 10^10 M⊙.
In the following sections, we investigate correlations between metallicity gradients and the stellar mass (log M [M⊙]), stellar mass surface density (log µ, as a proxy for morphology), the concentration index (c = r 90 / r 50, a proxy for the bulge-to-total mass ratio), specific star formation rate (sSFR = SFR/M), NUV-r colour, atomic and molecular gas mass fraction (gas mass fractions are defined as log f Gas = log(M Gas / M)), and the deficiency factor for atomic and molecular gas. Details on the derivation of these quantities are given in Saintonge et al. (2017) and Catinella et al. (2018). The deficiency factor is the difference between an estimate of the gas mass fraction from a scaling relation and the actually measured gas fraction. Here we use the best and tightest scaling relations available from the xGASS and xCOLD GASS analysis. For H i, this is the relation between log f HI and NUV-r colour, more specifically the binned medians from Table 1 in Catinella et al. (2018). For H 2, we used the scaling relation between log f H2 and log sSFR based on the "Binning" values for the entire xCOLD GASS sample in Table 6 of Saintonge et al. (2017). In both cases we extrapolated between the bins to get an expected gas mass fraction. The deficiency factor is then

def_Gas = log f_Gas,expected − log f_Gas,measured,

with f the gas mass to stellar mass fraction. A negative deficiency factor therefore indicates that a galaxy is more gas-rich than the average galaxy sharing a similar NUV-r colour or sSFR.
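In code, the deficiency factor reduces to an interpolation of the binned scaling relation followed by a subtraction. The sketch below illustrates this for H i; the bin values are placeholders, not the published Catinella et al. (2018) medians.

```python
import numpy as np

# Binned medians of the xGASS log f_HI vs NUV-r scaling relation would go here;
# the numbers below are placeholders for illustration, not the published values.
nuv_r_bins   = np.array([1.5, 2.5, 3.5, 4.5, 5.5])
log_fhi_meds = np.array([-0.1, -0.4, -0.8, -1.2, -1.6])

def hi_deficiency(nuv_r, log_fhi_measured):
    """def_HI = log f_HI,expected - log f_HI,measured.
    Negative values mean the galaxy is more HI-rich than average at its NUV-r colour."""
    log_fhi_expected = np.interp(nuv_r, nuv_r_bins, log_fhi_meds)
    return log_fhi_expected - log_fhi_measured
```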
Results: Metallicity gradients
In this section, we present the results of a detailed analysis of the correlation between metallicity gradients and global galaxy properties, in particular star formation activity and gas content. We start by analysing and establishing which correlations between metallicity gradient and global galaxy property are of interest in our sample. In this process, we consider both gradients measured from profiles with all radial data points and gradients measured from profiles without data points inside of 0.5 r eff,r. Then we compare our results to the literature and discuss potential differences.
[Fig. 5 caption: correlation coefficients calculated for relations between metallicity gradients and global galaxy properties (colour code) with their error bars. The panels show, from left to right, correlation coefficients for metallicity gradients in units of dex r −1 eff,r, dex r −1 90,r, dex R −1 25,r, and dex kpc −1. We note that all radii have been measured in the r band; similar results can be obtained for radii in the u and i bands. For each global property, four different correlation coefficients are presented, marked by the shape of the data point. The correlation coefficients were measured with gradients based on: (A) the entire radial profile for all galaxies in the sample; (B) the entire radial profile for massive galaxies; (C) the radial profile outside of 0.5 r eff,r for all galaxies in the sample; (D) the radial profile outside of 0.5 r eff,r for massive galaxies. Black dashed lines mark correlation coefficients of −0.3 and 0.3.]
Investigating correlations between gradients and global galaxy properties
In order to test for the presence and strength of correlations between gradients and global galaxy properties, we applied multiple methods. Firstly, we calculated Spearman correlation coefficients between metallicity gradients and each global galaxy property. For those global galaxy properties with correlation coefficients that significantly depart from zero, we obtained (semi-)partial Spearman correlation coefficients; these take into account the inter-correlations between the various global galaxy properties. Through a backward elimination based on the results of a multiple linear regression, we then searched for the most important global galaxy property in determining the metallicity gradients. Finally, we trained a random forest model to predict metallicity gradients from those global galaxy properties that have correlation coefficients significantly different from zero, and asked the model which feature was most important in predicting the metallicity gradient.
Spearman correlation coefficients
We measured correlation coefficients between metallicity gradients and global properties as Spearman R values and calculated their errors through bootstrapping: for a sample of n measurements, 0.8 × n measurements were randomly drawn from the sample (with replacement) and their correlation coefficient was measured. This process was repeated 0.8 × n times. The error of the correlation coefficient was then set to the standard deviation of the sample of 0.8 × n correlation coefficients. The median bootstrapping error of all measured correlation coefficients amounts to 0.1. Therefore, for a relation to be further considered and analysed, an absolute correlation coefficient |R| > 0.3 was required (3σ different from 0). The absolute correlation coefficient |R| can have values between 0 and 1, where numbers closer to 1 represent tighter and stronger correlations (or anti-correlations if R is negative). Figure 5 shows Spearman R correlation coefficients for metallicity gradients (measured with and without the central 0.5 r eff,r) and global properties. Each data point represents the correlation coefficient between one gradient (e.g. the gas-phase metallicity gradient in units of dex r −1 90,r) and one global galaxy property (e.g. stellar mass). The shape of the data points indicates the dataset for which the correlation coefficient was measured. We note that diamonds and triangles (correlations with gradients measured from radial profiles outside of 0.5 r eff,r) generally indicate less pronounced correlations than squares and circles (gradients measured from the full radial profile).
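The bootstrap described above can be written compactly. The sketch below follows the recipe in the text (draw 0.8 n points with replacement, repeat 0.8 n times, take the standard deviation of the resampled coefficients) and is an illustration rather than the exact code used.

```python
import numpy as np
from scipy.stats import spearmanr

def bootstrap_spearman(x, y, n_frac=0.8, seed=None):
    """Spearman R between x and y plus a bootstrap error estimate."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x), np.asarray(y)
    n = len(x)
    n_draw = int(n_frac * n)
    r_full, _ = spearmanr(x, y)
    resampled = []
    for _ in range(n_draw):
        idx = rng.integers(0, n, size=n_draw)  # draw 0.8*n points with replacement
        r, _ = spearmanr(x[idx], y[idx])
        resampled.append(r)
    return r_full, np.std(resampled)
```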
The median maximal radius at which we can reliably measure metallicities is 2.5 r eff,r for massive galaxies, and 1.5 r eff,r for low-mass galaxies. This means that when measuring metallicity gradients from the radial range between 0.5 and 2.0 r eff,r, we mostly exclude low-mass galaxies because they do not have enough (i.e. three or more) radial metallicity measurements between 0.5 and 2.0 r eff,r to fit a gradient. In order to understand the effect of looking at massive star-forming galaxies only, we also measured correlation coefficients for massive galaxies (log M [M⊙] > 10, cases (B) and (D) in Fig. 5). These correlation coefficients for massive galaxies are usually closer to 0 than the correlation coefficients for the entire sample. This points to a scenario in which the observed trends are amplified by low-mass galaxies.
For now, we are focusing on all relations for which the correlation coefficient is larger than 0.3 or smaller than −0.3 and which are thus three times larger than the median error or in other words significantly different from zero. For gradients measured from the entire radial profile and when considering all galaxies for which a gradient could be measured (circles in Fig. 5), we find that the correlation coefficient is significantly different from zero for relations between metallicity gradient in units of dex r −1 eff,r and log µ , log M [M ], log f HI , NUV-r colour, concentration index c, and log SFR. The correlation coefficients for other metallicity gradients, namely, in units of dex r −1 90,r , dex R −1 25,r and dex kpc −1 show similar trends, which are generally weaker, however.
Overall, we have a wide radial coverage all the way out to 2 r eff,r for massive galaxies but the more central measurements of metallicity often cannot be used for metallicity gradient measurement as their location on the BPT diagnostic plot indicates that the emission lines are excited by AGN-like emission rather than the emission of star-forming regions. Hence, restricting a study only to consider massive galaxies already goes in the direction of analysing correlations for metallicity gradients measured only from data points outside of 0.5 r eff,r .
This analysis points towards a scenario in which the correlations we see between a metallicity gradient and global galaxy properties are affected by the radial location of the radial metallicity measurements, which are used for the metallicity gradient measurement. To further analyse this assumption, we only look at metallicity gradients, which were measured on radial profiles without the data in the inner 0.5 r eff,r . These data are shown as triangles and diamonds in Fig. 5. In this case, we find no correlation remaining with absolute correlation coefficients |R| > 0.3, except for the ones between metallicity gradients in units of dex r −1 eff and log µ for all galaxies and between metallicity gradients in units of dex kpc −1 and log µ , log M [M ] and NUV-r colour.
In the following sections, we delve deeper into a statistical analysis of these correlations. We are especially interested in understanding which correlation is primary and which are secondary effects. We only focus on metallicity gradients in units of dex r −1 eff , because these are widely used in the literature, they yield the tightest correlations and the other metallicity gradients behave similarly.
(semi-)Partial Spearman correlation coefficients
To further investigate the correlations found in the last section, we also considered (semi-)partial Spearman correlation coefficients, which provide the same information as the correlation coefficients introduced above but allow us to fold in inter-correlations between the global galaxy properties. This is achieved by holding one measurement constant while looking at the correlation coefficient of two other measurements; in practice, this means computing correlation coefficients between residuals. Since log µ and log f HI are correlated (Brown et al. 2015) and both appear to be correlated with the metallicity gradient, we must control for log µ to evaluate the strength of the 'remaining' correlation between log f HI and the metallicity gradient. We performed this analysis for the ensemble of log µ, log M [M⊙], log f HI, NUV-r colour, concentration index, log sSFR, and log SFR and their correlation to the metallicity gradients in units of dex r −1 eff,r. These global galaxy properties are all those that showed a significant correlation in the previous section; in addition, we include the concentration index, c, as it was found to be correlated with the metallicity gradient by M12. When controlling for these properties using the partial_corr implementation in the pingouin package, we only find a strong correlation between log µ and metallicity gradients. This result holds both for metallicity gradients measured from the entire radial profile and for metallicity gradients measured without data in the central 0.5 r eff,r. For all other galaxy properties, the (semi-)partial correlation coefficients are significantly smaller than 0.3 and the majority of their 95 percent confidence intervals include 0, that is, both a correlation and an anti-correlation would be possible.
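As an illustration, the partial correlation between the gradient and log f HI at fixed log µ can be computed with pingouin's partial_corr as sketched below; the column names are placeholders for however the per-galaxy table is organised.

```python
import pingouin as pg

def gradient_partial_corr(df):
    """Spearman partial correlation between the metallicity gradient and log f_HI,
    controlling for log mu_star. df is a pandas DataFrame with one row per galaxy;
    the column names below are placeholders."""
    return pg.partial_corr(data=df, x="log_fHI", y="grad_dex_per_reff",
                           covar=["log_mustar"], method="spearman")

# Usage: gradient_partial_corr(df)["r"] gives the partial correlation coefficient.
```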
Backward elimination
A second method to find the one variable in a set of features that contributes most to predicting a result is backward elimination. We used backward elimination in the following way: we fitted an ordinary least squares multiple linear regression in which the metallicity gradient in units of dex r −1 eff,r was the dependent variable, described as a linear combination of log µ, log M [M⊙], log f HI, NUV-r colour, concentration index, log sSFR, and log SFR (the same selection of global galaxy properties as in the previous section) plus a constant. We then examined the statistics of this model (as provided by the OLS module of the statsmodels package, Seabold & Perktold 2010). Among other measures, these statistics provide a p-value for the t-statistic of each fitted coefficient. If this p-value is large for one of the variables, then this variable is likely not useful in the fit. In the context of our backward elimination, we used the p-value in the following, iterative way: after the first multiple linear regression, we eliminated the variable with the largest p-value, then ran the fit again without the eliminated variable. We continued to eliminate variables and re-run the fit until the p-values of all remaining variables were below 0.05, a threshold commonly judged as statistically significant.
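The iterative procedure can be condensed into a few lines with statsmodels; the sketch below is a generic backward-elimination loop under the same p < 0.05 stopping rule, not the exact script used for this analysis.

```python
import statsmodels.api as sm

def backward_eliminate(X, y, p_threshold=0.05):
    """Iteratively drop the predictor with the largest p-value from an OLS fit
    until all remaining predictors are significant.
    X : pandas DataFrame of global galaxy properties; y : metallicity gradients."""
    features = list(X.columns)
    while features:
        model = sm.OLS(y, sm.add_constant(X[features])).fit()
        pvals = model.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] < p_threshold:
            return features, model   # all remaining predictors are significant
        features.remove(worst)       # drop the least useful predictor and refit
    return [], None
```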
Applying this procedure to our data returned the following results: for metallicity gradients measured on the entire radial profile, the backward elimination leaves log µ and log f HI , with the importance (i. e. the coefficient) of log µ twice the one of log f HI . For metallicity gradients measured without data points at radii smaller than 0.5 r eff,r , only log µ remains with a p-value smaller than 0.05.
A caveat of this method is the underlying assumption of linear relations between metallicity gradients and global galaxy properties, which is not necessarily the case. We improve on this caveat in the next section by using a random forest regression.
Random Forest model
A random forest (Ho 1995) is a non-parametric, supervised machine learning technique made up of a set of decision trees. The result of a random forest is the mean of all decision trees in the forest and is thus generally more robust than a single decision tree. The aim of this analysis is to train a random forest to predict the metallicity gradient in units of dex r −1 eff,r from log µ , log M [M ], log f HI , NUV-r colour, concentration index c, log sSFR, and log SFR. Once the model is fully trained, we can ask what is the relative contribution of the different features to predicting the metallicity gradient.
We used the implementation provided by scikit-learn (Pedregosa et al. 2011) and trained the model to optimise the mean squared error. We allow for a maximum of 20 leaf nodes in the decision trees, use 160 decision trees and leave the default settings for all other parameters. As mentioned above, not all galaxies in our sample are equipped with all measurements and sometimes metallicity gradients could not be measured due to too few radial bins with sufficient emission line detections. Thus the samples to work with contain 81 (67) galaxies for metallicity gradients measured on the entire radial profile (only from data points outside of 0.5 r eff,r ). Of each sample, we use 80 percent of the galaxies for training purposes and 20 percent to test the resulting model. Tests after the training showed that metallicity gradients can be predicted with a mean absolute error of 0.06 (0.07) dex r −1 eff,r for metallicity gradients measured on the entire radial profile (only from data points outside of 0.5 r eff,r ), and the most relevant features for the prediction are log f HI and log µ in both cases.
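A minimal version of this training and feature-ranking step with scikit-learn could look as follows; the hyperparameters match the values quoted above (160 trees, at most 20 leaf nodes, 80/20 split), while the function name and the fixed random seed are our own choices for the sketch.

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

def rank_gradient_predictors(X, y):
    """Train a random forest on global galaxy properties (X, a DataFrame) to predict
    metallicity gradients (y) and return the test error and feature importances."""
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                        random_state=0)
    forest = RandomForestRegressor(n_estimators=160, max_leaf_nodes=20,
                                   random_state=0)  # default criterion: squared error
    forest.fit(X_train, y_train)
    mae = mean_absolute_error(y_test, forest.predict(X_test))
    importances = dict(zip(X.columns, forest.feature_importances_))
    return mae, importances
```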
Correlation between the metallicity gradient, stellar mass surface density, stellar mass, and f HI
With the statistical tests shown in the last sections, a scenario is building in which log µ is the main driver of metallicity gradients and log f HI may play a secondary role. We present further investigations of these correlations and of the correlation with stellar mass, because the latter is best studied in the literature. We examined them by dividing the sample into five bins of the global property, such that each bin contained about the same number of galaxies. The average radial gradient in each bin was then estimated in two different ways: (i) a gradient measured from the average stacked metallicity profile based on the radial metallicity measurements of all galaxies in the bin, called the 'gradient of a stacked profile', and (ii) the median of all gradients measured from individual galaxy metallicity profiles, called 'the average gradient of individual profiles'. To obtain the gradient of the stacked profile, we take all radial data points of all galaxies within one bin and fit a line to all radial metallicity measurements that fulfil our quality criteria. Radial data points for all galaxies are weighted equally and radii are normalised or measured in kpc. The resulting correlations are shown in Fig. 6 (metallicity gradients measured on the entire radial metallicity profile) and Fig. 7 (gradients measured from the radial metallicity profile outside of 0.5 r eff,r). The strongest correlation between global galaxy properties and metallicity gradients in our samples, in all cases, is observed with log µ: generally, galaxies with lower µ have steeper metallicity gradients than high-µ galaxies, regardless of the radius normalisation or unit. When comparing these correlations to the ones obtained when measuring the metallicity gradients from radial profiles without the inner 0.5 r eff,r (Fig. 7, left column), an increase in scatter and an overall flattening of the trends can be seen. This is also reflected in the correlation coefficients, which generally decrease from around 0.5 to around 0.2. A notable exception is the correlation coefficient between log µ and the metallicity gradient in units of dex r −1 eff,r (top row, left panel in Fig. 7), which is the only correlation coefficient that is larger than 0.3 in the analysis of gradients measured from profiles without the central 0.5 r eff,r.
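The two bin-averaged estimators can be sketched as follows; the data layout (concatenated radial points plus one fitted gradient per galaxy, with matching bin indices) is an assumption made for illustration.

```python
import numpy as np

def binned_average_gradients(r_norm, oh12, point_bin, gradients, galaxy_bin):
    """Two estimates of the average metallicity gradient per bin of a global property.

    r_norm, oh12 : concatenated radial data points of all galaxies (quality cuts applied)
    point_bin    : bin index of the host galaxy for each radial data point
    gradients    : one fitted gradient per galaxy
    galaxy_bin   : bin index for each galaxy
    """
    results = {}
    for b in np.unique(galaxy_bin):
        sel = point_bin == b
        # (i) gradient of the stacked profile: one straight-line fit to all points in the bin
        slope, _ = np.polyfit(r_norm[sel], oh12[sel], 1)
        # (ii) median of the gradients of the individual galaxies in the bin
        median_grad = np.median(gradients[galaxy_bin == b])
        results[b] = {"stacked_profile_gradient": slope,
                      "median_individual_gradient": median_grad}
    return results
```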
The relation between M and metallicity gradients is weaker than the one between log µ and metallicity gradients. For gradients measured on the entire radial metallicity profile, we find correlation coefficients larger than 0.3 for all normalising radii (middle column, Fig. 6). The scatter is larger for gradients in units of dex R −1 25,r or dex r −1 90,r . In particular, for gradients in units of dex r −1 eff,r , it can be seen that the trend between stellar mass and metallicity gradients measured from the entire radial profile is driven by low-mass galaxies. Once we remove the inner 0.5 r eff,r from the metallicity profile for gradient measurement (middle column, Fig. 7), the resulting gradients do not correlate with stellar mass any longer: the scatter of gradients from individual profiles increases and both approaches to measuring binned, average gradients either produce a flat relation or scatter throughout the parameter space.
A further test of whether stellar mass or µ is the more defining factor in determining the metallicity gradient was inspired by Fig. 5 of Belfiore et al. (2017b), but the results are inconclusive. The yellow symbols and lines in Figs. 6 and 7 show the expected gradients. For the relation between µ and metallicity gradient, we calculate the average stellar mass in each bin of µ and then extrapolate between the nearest M bins to get the expected metallicity gradient, and vice versa. As the expected average metallicity gradients match the measured average metallicity gradients, this test does not provide additional insights.
The third global property that we consider here is the H i gas mass fraction. The strongest correlations with f HI are measured for gradients in units of dex r −1 eff,r. While the correlation coefficients for gradients with normalising radius r eff are at least 3σ different from zero, we find that gradients with normalising radii r 90 and R 25 and in units of dex kpc −1 are only 2 to 3σ different from zero. Once moving from gradients measured on the entire radial metallicity profiles to gradients measured without the central 0.5 r eff,r, we find a behaviour similar to that observed for the correlations between stellar mass and metallicity gradients: the scatter of the individual gradients increases, and correlations of binned values either flatten or their scatter increases as well.
The sample selection and the resulting distribution of global galaxy properties can also affect correlations between metallicity gradients and global galaxy properties. As can be seen in Fig. 2, for example, the stellar mass range 9.0 ≤ log M [M⊙] ≤ 10.0 is more sparsely sampled. Hence, individual extreme, low-mass galaxies might significantly drive correlations. To show that this is not the case, we plot the individual metallicity gradients in Figs. 6 and 7 as small grey symbols.
Comparison to MaNGA
As indicated in the introduction, the rise of large IFU surveys has provided large samples of local star-forming galaxies for which a metallicity gradient can be measured. In Fig. 8, the results of this paper are compared to data from MaNGA.
The MaNGA data for this comparison come from data release 15 and include two value-added catalogues: the MaNGA Pipe3D value-added catalogue of spatially resolved and integrated properties of galaxies for DR15 (Sánchez et al. 2016a,b) and the HI-MaNGA Data Release 1 (Masters et al. 2019). The Pipe3D catalogue includes gas-phase metallicity gradients measured in units of dex r −1 eff,r, the local gas-phase metallicity measured at the effective radius, the total stellar mass, and global star formation rates (from Hα emission lines). We note that the metallicity estimator used in this MaNGA catalogue is the O3N2 estimator by Marino et al. (2013). While both the Marino et al. (2013; M13) and the Pettini & Pagel (2004; PP04) calibrators are used in this work, for a consistent comparison we use the M13 O3N2 method in every figure that includes MaNGA data (and note in the figure caption when this is the case). We combine the Pipe3D catalogue with the SDSS DR7 MPA-JHU catalogue (https://wwwmpa.mpa-garching.mpg.de/SDSS/DR7/) to obtain 50 percent Petrosian radii for all galaxies. Together with the stellar mass as given by the MaNGA team, we are thus able to calculate µ (see Sect. 2). In addition, we use the H i masses and upper mass limits as provided by Masters et al. (2019).
[Fig. 8 caption: grey open circles in the background and teal filled circles show the data from this work for the case that metallicity gradients were measured from profiles without radial data points inside of 0.5 r eff. The bins within which median gradients were measured were set to be equidistant in order to mitigate any effects of different distributions of the global properties. We note that we use the M13 O3N2 calibration in this figure.]
If we select only those galaxies that have measured metallicity gradients, at least an upper limit for the H i mass, a match in the MPA-JHU catalogue (for µ measurements), and that are within ±1.5σ of the Catinella et al. (2018) SFMS, we get a sample of 544 galaxies from the MaNGA data sets. For simplicity, we treat H i mass upper limits as their true values. In Fig. 8, we show the different correlations between metallicity gradient and global galaxy properties (from left to right: stellar mass surface density, stellar mass, and H i mass fraction). As the distributions of our and the MaNGA galaxies in these properties are different, we fix the widths (0.5 dex) and centres of the bins of global galaxy properties. This approach simulates a flat distribution in stellar mass surface density, stellar mass, and H i mass fraction for both the MaNGA and our sample. In each of these bins, we only use galaxies within a certain metallicity gradient percentile range (the 16-84 percentile range) to remove extreme outliers, and refer to the corresponding quantities, for example the mean gradient, as 'trimmed'. For each bin of galaxies, we thus calculate a trimmed mean gradient, standard deviation, error of the trimmed mean, and a median gradient. In Fig. 8, the median gradients are shown at the centre of the bin, and the numbers at the bottom indicate how many galaxies contributed to each median. As trimmed means and medians agree within the standard deviation, we only show the median gradients.
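The trimmed statistics per fixed-width bin can be sketched as below; the bin edges and the 16-84 percentile range are passed in explicitly, and the return format is our own choice for the illustration.

```python
import numpy as np

def trimmed_bin_stats(values, gradients, bin_edges, pct=(16, 84)):
    """Median and trimmed mean gradient in fixed bins of a global property.
    Within each bin, only gradients inside the given percentile range are used."""
    stats = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        sel = (values >= lo) & (values < hi)
        g = gradients[sel]
        if len(g) == 0:
            stats.append(None)
            continue
        p_lo, p_hi = np.percentile(g, pct)
        trimmed = g[(g >= p_lo) & (g <= p_hi)]
        stats.append({"bin_centre": 0.5 * (lo + hi),
                      "median": np.median(trimmed),
                      "trimmed_mean": np.mean(trimmed),
                      "n_galaxies": len(trimmed)})
    return stats
```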
We also use a second, more stringent percentile range (40-60) as a check of the initial, broader range. Although the numbers of galaxies drop significantly for the 40-60 percentile cut, the mean and median gradients trimmed in this way agree well between the 40-60 and the 16-84 percentile cut methods. This means that the mild cut is sufficient to estimate a robust mean. The resulting trends agree with the observations in Fig. 7. In the MaNGA data, correlations between the metallicity gradients and these global properties can also be seen. In most cases, our metallicity gradients, which were measured without data points at radii smaller than 0.5 r eff,r, agree with the MaNGA data, except for low-mass, low-µ systems. It is interesting to note that the general trend between metallicity gradients and log µ (galaxies with lower µ have steeper declining metallicity gradients) is also seen in the MaNGA data, except for the lowest log µ bin. Considering the distribution and location of the individual gradient measurements (grey symbols in Fig. 8), our sample overall covers a similar parameter space as the MaNGA measurements. There are three low-mass galaxies with steeper gradients than most MaNGA galaxies at the same stellar mass. We have placed both the MaNGA galaxies and our sample on various scaling relations to understand whether these galaxies are special with respect to star formation or gas content, but this is not the case (see also Appendix A and Fig. A.1). Belfiore et al. (2017b) investigated the correlation between metallicity gradients and stellar mass based on MaNGA data and found that the steepest (most negative) metallicity gradients are measured for galaxies with stellar masses around 10^10 to 10^10.5 M⊙. Galaxies at lower and higher stellar masses have flatter radial metallicity profiles. This trend can also be seen in the middle column of Fig. 8. This is particularly interesting as the Belfiore et al. (2017b) results are not based on the Sánchez et al. value-added catalogue that we use in the present work.
Radial variation of the gas-phase metallicity
One reason why the correlations between metallicity gradient and M, f HI (and µ) change depending on how the metallicity gradient is measured and which sub-sample is considered can be seen when taking the average shape of the radial metallicity profiles into account. To do so, a median radial metallicity profile was calculated for each bin of µ, M, and f HI. Each profile is the running median of all data points that meet the criteria to be included in the gradient fit, for all galaxies within one bin of the global galaxy property. These profiles are shown in Fig. 9.
[Fig. 9 caption: median metallicity profiles in bins of different global galaxy properties. Each row of plots corresponds to one global galaxy property, from top to bottom: µ, M, and f HI. Each panel in a row shows the average radial metallicity profiles of all galaxies within the bin of the global galaxy property, with the range given at the top of the panel. The dark shaded region corresponds to 0.5 ≤ r/r eff,r ≤ 2.0, which is the radial region within which MaNGA computes metallicity gradients. Circles show the median metallicity profiles of all galaxies and triangles the median metallicity profiles of massive galaxies only (M > 10^10 M⊙). These profiles have been computed in radial bins of equal radial width. The small grey dots show the individual radial metallicity data points. The number in the bottom right corner of each panel indicates the percentage of radial data points located within the range 0.5 ≤ r/r eff,r ≤ 2.0.]
As can be seen in Fig. 9, it is not only the metallicity gradient, but also the shape and y-axis intercept of the median metallicity profile that vary with µ, M, and f HI. Overall, galaxies with lower µ, lower stellar masses, and higher H i mass fractions have lower central metallicities, in agreement with the mass-metallicity relation (see e.g. Tremonti et al. 2004; Bothwell et al. 2013; Brown et al. 2018). In addition, the median profiles of higher-µ galaxies with higher stellar masses and lower H i mass fractions show a plateau or even a decrease of metallicity within approximately 0.5 r eff,r. When fitting a line to such a profile with a plateau in the centre, the resulting slope will be flatter. Thus, the central metallicity measurements within 0.5 r eff,r affect the resulting metallicity gradient. This effect is enhanced by the fact that only approximately 50 percent of all radial data points are within the radial range 0.5 ≤ r/r eff,r ≤ 2.0; another 30-40 percent of our radial metallicity data points are at radii smaller than 0.5 r eff,r. When measuring metallicity gradients including the inner 0.5 r eff,r, the results are thus significantly affected by these data. This effect has been observed before by, for example, Rosales-Ortega et al. (2009) and Sánchez et al. (2014), and it is one reason why some studies dismiss central metallicity measurements in their gradient estimation.
We computed these profiles for different subsets of our galaxy sample (circles: all galaxies, triangles: only massive galaxies, i.e. M > 10^10 M⊙). For bulgy and relatively H i-poor galaxies in particular, the radial profiles are dominated by massive galaxies, and the median profiles of all galaxies and of massive galaxies only are consistent with each other. For the lowest-µ, H i-rich galaxies, we find that the median profile of all galaxies differs from the median profile of massive galaxies (see the top left and bottom right panels in Fig. 9). Thus, the largest effect of low-mass galaxies on the measurements of average gradients is seen in these bins (smallest µ, most H i-rich). This effect, together with low number statistics and the fact that some of our lowest-mass galaxies have relatively steep gradients, contributes to the discrepancy with MaNGA at low stellar mass surface densities and high H i mass fractions.
At this point, we found that all correlations are to some degree dependent on the sample selection and the definition of the metallicity gradient. Our analysis suggests that there is a correlation between metallicity gradients and µ for galaxies on the star formation main sequence, especially when measuring the gradient from the entire radial metallicity profile. Before we move on to discussing this finding in greater detail in Sect. 6, we explore the relation between local metallicity and global H i content.
Results: Local metallicity and global HI content
We now focus on the correlation between local gas-phase metallicity measurements and the global H i mass fraction as found by M12. With the new NTT data presented here, we are able to build on the findings of previous works.
Previously, M12 reported a correlation between the local metallicity at the edge of the stellar disc and the global H i mass fraction. In Fig. 10, we added the data of the new low-mass galaxies and find that the correlation holds. For MaNGA galaxies, only local metallicity measurements at one effective radius are provided in the value-added catalogues. Together with all those galaxies from our sample that have a metallicity measurement within ±10 percent of the effective radius in the r band, the correlation between the local metallicity around the effective radius and the global H i mass fraction is shown in Fig. 11. Again, a correlation is recovered. In summary, we find that the local metallicity correlates with the global H i mass fraction.
[Fig. 10 caption: the global H i mass fraction as a function of metallicity measurements outside of 0.7 r 90,r. The data points are colour coded according to their galactocentric radius normalised by r 90,r. We note that some galaxies have multiple metallicity measurements outside of 0.7 r 90,r and thus appear multiple times. The number in the lower left corner provides the Spearman correlation coefficient.]
One way to explain these correlations between local metallicity and global H i mass fraction is suggested by the following simple model. We assume an exponential stellar disc, such that the stellar mass surface density is given by

Σ⋆(r) = Σ_0,⋆ exp(−r/r_0,⋆),

and the total stellar mass by

M⋆ = 2π Σ_0,⋆ r_0,⋆^2,

where r_0,⋆ is the stellar scale length and Σ_0,⋆ the central stellar column density. For the H i disc, we assume a constant column density Σ_0,HI out to the H i disc size r_HI, so that the H i mass is

M_HI = π r_HI^2 Σ_0,HI.

Furthermore, we assume a local closed-box model in which the local metallicity Z at radius r can be described as (see e.g. Mo et al. 2010)

Z(r) = −y_eff ln[ (Σ_HI(r) + Σ_H2(r)) / (Σ_HI(r) + Σ_H2(r) + Σ⋆(r)) ],

with y_eff the effective yield and Σ(r) the local column densities of H i, H 2, and stars at radius r. To evaluate this equation at the effective radius r_eff, we take into account that, for an exponential disc, r_eff ≈ 1.68 r_0,⋆.
[Fig. 11 caption: the dashed lines show the model prediction for an effective yield (Pilyugin et al. 2004) and the dotted lines for a stellar yield of 0.037 (Vincenzo et al. 2016 and references therein). We note that we use the M13 O3N2 calibration in this figure.]
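To make the model concrete, the sketch below evaluates the local closed-box prediction at a chosen radius as a function of the global H i mass fraction. The structure follows the equations above; the conversion from metal mass fraction to 12 + log(O/H) is a rough approximation standing in for the De Vis et al. (2017) conversion used in the paper, and the default parameter values are only examples.

```python
import numpy as np

def local_closed_box_oh12(log_fhi, y_eff=0.00268, rhi_over_r0=5.6,
                          r_over_r0=1.68, fh2_over_fhi=0.0):
    """Toy-model 12+log(O/H) at radius r = r_over_r0 * r_0 for a global HI fraction.

    Exponential stellar disc: Sigma_star(r) = Sigma_0 exp(-r/r0), M_star = 2 pi Sigma_0 r0^2.
    Flat HI disc of radius r_HI: Sigma_HI = M_HI / (pi r_HI^2).
    Local closed box: Z(r) = -y_eff * ln( Sigma_gas / (Sigma_gas + Sigma_star) ).

    log_fhi is log10(M_HI/M_star). The final conversion from oxygen mass fraction
    to 12+log(O/H) is a rough approximation, not the De Vis et al. (2017) relation.
    """
    fhi = 10.0 ** np.asarray(log_fhi, dtype=float)
    # gas-to-stellar surface density ratio at the chosen radius
    sigma_ratio = 2.0 * np.exp(r_over_r0) / rhi_over_r0 ** 2 * fhi * (1.0 + fh2_over_fhi)
    gas_fraction = sigma_ratio / (1.0 + sigma_ratio)
    Z = -y_eff * np.log(gas_fraction)
    # approximate 12 + log(O/H) ~ 12 + log10( Z_O / (16 * X_H) ) with X_H ~ 0.74
    return 12.0 + np.log10(Z / (16.0 * 0.74))

# Example: model prediction at r_eff for log f_HI between -2 and 0.5
print(local_closed_box_oh12(np.linspace(-2.0, 0.5, 6)))
```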
According to Broeils & Rhee (1997), for instance, there is a good correlation between the radius of the H i disc and that of the stellar disc for spiral galaxies. Thus, this (local closed-box) model indeed suggests a correlation between the local metallicity at the effective radius and the H i mass fraction. Since the stellar scale length is also tightly correlated with r 90, a similar calculation can be carried out for Fig. 10. In Fig. 11, we added the model prediction assuming a stellar oxygen yield of 0.037 (Vincenzo et al. 2016), derived from the Romano et al. (2010) and Nomoto et al. (2013) stellar models assuming a Chabrier (2003) initial mass function (dotted lines), and an effective oxygen yield of 0.00268 measured by Pilyugin et al. (2004) in spiral galaxies (dashed lines). Furthermore, we follow De Vis et al. (2017) to convert between metallicity mass fractions and metallicity number density fractions. We note that we show both an example for true stellar yields and one for an effective yield. When using the effective yield, small amounts of in- and outflows are included in this toy model; with the stellar yield, it is a pure closed-box model. In addition, we show two different ratios of r_HI/r_0,⋆ = 3.3 and 5.6 in yellow and red, respectively. These ratios are approximately equivalent to r_HI/R_25 = 1.0 and 1.7, with r_HI/R_25 = 1.7 (red line) the value preferred by observations of spiral galaxies (Broeils & Rhee 1997). More recent measurements of this ratio by Wang et al. (2016) find a broader range of values (discussed further below).
[Fig. 12 caption (cf. Fig. 5 of M12): the dark crosses in the background are data by M12, the yellow points our linear fits to the metallicity profiles of massive, quiescent galaxies (off the SFMS), and the orange points our linear fits to the metallicity profiles of massive, star-forming galaxies (on the SFMS).]
Metallicity gradients
We analysed the radial metallicity profile of a sample of star formation main sequence galaxies from the xGASS and xCOLD GASS galaxy sample and investigated the correlation with global galaxy properties, such as H i and H 2 gas mass fraction, stellar mass, morphology and star formation activity. Depending on the method for measuring the gradients and the radial region in which the gradient is measured, we get the following results.
Firstly, when measuring the metallicity gradient from the entire radial profile, we find correlations between metallicity gradients and multiple global galaxy properties with correlation coefficients significantly different from zero. However, the correlation coefficients get closer to 0 when considering only massive galaxies (M > 10^10 M⊙). The correlations between metallicity gradients and global galaxy properties are tightest for metallicity gradients measured in units of dex r −1 eff,r, but we also recover these correlations when normalising the galactocentric radius with the Petrosian r 90 or the isophotal radius R 25, or when measuring the radius in kpc. The correlations are such that less massive, more H i-rich galaxies with smaller µ have steeper metallicity gradients than more massive, higher-µ, more H i-poor galaxies.
Secondly, when measuring the metallicity gradient from the radial profile without the central 0.5 r eff,r, we only recover a correlation coefficient significantly different from zero for log µ and the metallicity gradient in units of dex r −1 eff,r. All other relations between metallicity gradients and global galaxy properties are either flat or the data are too scattered across the parameter space. This implies that the stellar mass surface density shapes not only the radial metallicity profile in the centres of galaxies but also the steepness of the metallicity decline towards the outskirts.
In both cases, a deeper analysis of the inter-correlations between the global galaxy properties revealed that only log µ determines metallicity gradients; all other correlations appear to be driven by the relation between log µ and the other global galaxy properties. M12 found that the concentration index c (as a proxy for the bulge-to-total mass ratio) is more closely related to the metallicity gradient than µ. To understand these differences between two studies that use the same underlying data set, we compared the distributions of c and log µ for the M12 sample and our sample (see Fig. 12).
While we use the same radially binned spectra for galaxies with stellar masses greater than 10^10 M⊙ as M12, in this paper we use a different method to fit the gradients and we only use galaxies within ±1.5σ of the star formation main sequence as defined by Catinella et al. (2018). Generally, we recover the same trends as M12. As can be seen from the middle and right panels of Fig. 12, the correlations between the metallicity gradients and the stellar mass surface density µ or the concentration index c are different for our work than for M12. Where the removal of quiescent galaxies emphasises a correlation between metallicity gradients and µ, the same step wipes out the correlation between metallicity gradient and c. Selecting only SFMS galaxies, as we did, preferentially removes from the M12 sample galaxies with flat or positive gradients and large concentration indices, as well as galaxies with all types of gradients and large stellar mass surface densities; slightly different trends are thus induced. Overall, the results from M12 and our results agree in the sense that more bulge-dominated galaxies have flatter radial metallicity profiles than more disc-dominated galaxies. This is in contrast to results based on CALIFA data, which did not find any correlation between the metallicity gradient and Hubble type (Sánchez et al. 2014; Sánchez-Menguiano et al. 2016).
Overall, we observe that the steepness of metallicity gradients and the shape of radial metallicity profiles are driven by the stellar mass surface density. We also see correlations with stellar mass, but our statistical tests suggest that the stellar mass surface density is the more important driver. To understand what this finding implies for galaxy evolution, we consider two chemo-dynamical models, by Pezzulli & Fraternali (2016) and Boissier & Prantzos (2000). Pezzulli & Fraternali (2016) consider models with growing exponential stellar discs. They find that in models in which gas accretes from the intergalactic medium (IGM) such that the disc grows with a constant exponential scale length, galaxies form metallicity gradients that are not compatible with observations. Once radial flows are added, the gradients become more realistic. When considering IGM gas accretion plus radial flows plus inside-out growth, realistic gradients are formed and less IGM accretion is needed than in the previous case. Overall, the steepness of their metallicity gradients is driven by the angular momentum misalignment of the accreted gas with respect to the disc: the more misaligned the accreted gas, the larger the radial gas flows and the steeper the metallicity gradients. In the context of our observational findings, these models suggest that galaxies with smaller µ would have larger radial flows, as indicated by their steeper metallicity gradients. Galaxies with larger µ have smaller radial flows, which would mean that less and less gas arrives at their centres. Once these galaxies use up the gas in their centres, inside-out quenching would set in, and shortly afterwards these galaxies would reach equilibrium in their centres. In our observations, this equilibrium state is reflected in the flattening of the radial metallicity profiles towards galaxy centres. Such a saturation effect has also been suggested in works such as Köppen & Edmunds (1999).
The chemo-dynamical models of Boissier & Prantzos (2000) investigate the galaxy evolution as a function of halo spin parameter λ and rotation velocity, which is a proxy for mass. These models rely purely on IGM accretion and inside-out growth of an exponential disk. No radial flows are implemented. The central surface brightness in their model galaxies is determined by the halo spin parameter, such that galaxies with smaller central surface brightness tend to reside in haloes with larger spins. This is also found in other simulations and models (e. g. Kim & Lee 2013). Quantitatively their metallicity gradients are steeper than commonly measured. However, qualitatively their Fig. 15 shows that their model galaxies form steeper gradients the higher the halo spin parameter, and thus the lower the central surface brightness. Furthermore, galaxies with very low halo spin and thus high central surface brightness appear to form a metallicity plateau in their centres. These results agree with our observations. Once more the flattening of the radial metallicity profile in the centre can be explained with different accretion patterns in low and higher µ galaxies. The IGM accretion onto more massive galaxies with higher total surface density is higher in the beginning but shuts down faster than for less massive and less dense galaxies (their Fig. 3). With the decrease in gas supply, once more metallicity converges towards an equilibrium value, as can be seen in the centres of our high µ galaxies.
The comparison to these two chemo-dynamical models suggests that our observational finding of steeper metallicity gradients in galaxies with lower µ can be explained. The recovered relation can be interpreted as the impact of either (i) the halo spin parameter on the inside-out growth of exponential discs or (ii) smaller radial gas flow velocities in galaxies of earlier type.
The correlation between metallicity gradient and stellar mass has often been discussed in the literature. The CALIFA team (Sánchez et al. 2014; Sánchez-Menguiano et al. 2016) as well as Kudritzki et al. (2015) and Ho et al. (2015) find a universal metallicity gradient, that is, no correlation with stellar mass or morphology. On the other hand, in the MaNGA data, Belfiore et al. (2017b) find the steepest declining metallicity profiles, that is, the steepest metallicity gradients, for galaxies around stellar masses of 10^10 to 10^10.5 M⊙, and flatter metallicity profiles in lower and higher mass galaxies. We recover these trends in the MaNGA data, which we use for comparison with our sample (middle column of Fig. 8). All these studies measure the gradients from radial metallicity profiles within the radial range of 0.5 to 2.0 r eff,r. When measuring metallicity gradients for our sample from the radial profiles without the central 0.5 r eff,r, we recover a relatively flat correlation with large scatter (see in particular the middle column in Fig. 7) and thus agree with previous studies. Interestingly, Bresolin (2019) has studied metallicity gradients in low-mass spirals with longslit spectroscopy and also found relatively steep metallicity gradients, consistent with or steeper than our measurements (see e.g. their Fig. 8). Poetrodjojo et al. (2018) measured metallicity gradients for a small number of SAMI galaxies using the entire radial metallicity profile and find that low-mass galaxies have flatter metallicity gradients than more massive galaxies. We note, however, that their upper stellar mass limit is 10^10.5 M⊙. They furthermore caution that the stellar mass distribution of the sample heavily impacts the observed trends between metallicity gradients and stellar mass. In addition, diffuse ionised gas might pose a problem (Poetrodjojo et al. 2019). The lower stellar mass limit of the galaxy sample investigated by M12 is 10^10 M⊙, and metallicity gradients were also measured on the entire radial metallicity profiles; they observed more massive galaxies to have flatter metallicity gradients. These two results are not mutually exclusive, and given that Belfiore et al. (2017b) observe a change of trend around the stellar mass limits of the Poetrodjojo et al. (2018) and M12 samples, the two results might even be complementary. In this case, we would expect to observe this turnover in our results. For our sample, however, the stellar mass range 9.0 < log M [M⊙] < 10.0 is more sparsely sampled than higher stellar masses and we only measured average gradients in one stellar mass bin in this range (see in particular the middle column, second row from the top in Fig. 6). Thus, a turnover cannot be robustly recovered from our data. Nonetheless, our analysis shows that in addition to the stellar mass distribution of the sample (as observed by Poetrodjojo et al. 2018), the radial location where the metallicity gradient is measured affects results regarding the correlation between gradients and global galaxy properties.
To date, aside from M12, there have been only a few studies investigating the link between metallicity and H i content. Brown et al. (2018), Bothwell et al. (2013), and Hughes et al. (2013) find that a larger H i content leads to lower (central) metallicities, while Bothwell et al. (2016) reported that molecular gas is more relevant in determining the metallicity. With respect to gradients (rather than central metallicities as in Bothwell et al. 2016), we find that H i is more tightly correlated with metallicity than H 2. Carton et al. (2015) investigated metallicity gradients in a sample of massive galaxies and found, in contrast to our results, that more H i-rich galaxies have flatter gradients. Their sample, however, covers a smaller stellar mass range than ours, does not reach H i mass fractions as high as our sample, and they use a different metallicity estimator; hence, the comparison is difficult. Nonetheless, we only observe the same trends as Carton et al. (2015) when considering our analysis of the MaNGA sample: higher H i mass fractions come with flatter metallicity gradients.
Overall, we find that log µ determines metallicity gradients in our sample of SFMS galaxies, which reflects predictions from the chemo-dynamical evolution models by Pezzulli & Fraternali (2016) and Boissier & Prantzos (2000). Correlations with stellar mass and H i mass fraction are less robust and a more detailed analysis suggests that these trends are induced due to correlations between log µ and stellar mass as well as log f HI .
Local metallicity and global HI mass fraction in local closed-box models
Based on the previous findings of M12, we investigated the correlation between local metallicity and global H i mass fraction.
Here, we consider local metallicities measured in the vicinity of either r eff,r (for our sample and for a sample of MaNGA galaxies with H i masses) or r 90,r (only for our sample). In both cases, we find a correlation between the local metallicity and the global H i-to-stellar mass ratio. When comparing the observed correlation to the relation expected for a local closed-box model utilising the true stellar yield, we find that metallicities are, as expected, significantly overestimated. When using an effective yield, which accounts for in- and outflows and turns the model into a gas regulator model, we find that the model is in better agreement with the data. The detailed choices are discussed below. Simulations (Forbes et al. 2014) have shown that radial gas flows are vital for the evolution of galaxies, but they are in equilibrium around a redshift of 0. Observations of radial flows in H i kinematics (Schmidt et al. 2016), which bring metal-poor gas towards the centres of galaxies, show that such flows exist but are not detected in every galaxy, most likely because they are small. Also, the Pezzulli & Fraternali (2016) model suggests that small radial flows are necessary but not the main driver of metallicity gradients. To compare the model to the data, we have to make assumptions for the (effective) yield and the ratio of H i to stellar disc size. The ratio of H i to stellar disc size has not yet been studied extensively. Broeils & Rhee (1997) find a remarkably tight correlation between the H i disc size and the 25 mag arcsec^-2 isophotal radius R 25 for spiral galaxies, with an average radius ratio of 1.7. However, galaxies with higher µ contain less H i and, thus, the ratio between H i and stellar disc size likely decreases. An extensive analysis by Wang et al. (2016) finds a range of radius ratios of approximately 0.6 ≲ r_HI/R_25 ≲ 5. Thus, we also show the model results with an H i to 25 mag arcsec^-2 isophotal radius ratio of 1.0. For the yield, we chose two different values: 0.00268, an effective yield obtained by Pilyugin et al. (2004) for spiral galaxies, and 0.037, a stellar yield obtained by Vincenzo et al. (2016) from the Romano et al. (2010) and Nomoto et al. (2013) stellar models assuming a Chabrier (2003) initial mass function and the average gas-phase metallicities of our galaxies. Being a measure of the true stellar yield, the prediction based on the Vincenzo et al. (2016) yield is an upper limit. Thus, outflows of metal-rich gas or inflows of metal-poor gas must indeed have taken place in our sample galaxies. The Pilyugin et al. (2004) prediction appears at the lower end of our data, which might imply that in- and outflows in our sample galaxies are less effective or pronounced than in the spiral galaxies analysed by Pilyugin et al. (2004). In addition, the differing metallicity estimators between our work and Pilyugin et al. (2004) might induce differences (Vincenzo et al. 2016). Overall, this model works well to explain the correlation between a local metallicity measurement and the global H i-to-stellar-mass ratio. Recent large surveys of the H i fraction and its correlation with other global properties of galaxies suggest that morphology (as described by the stellar mass surface density µ) is one defining factor (secondary to NUV-r colour) in setting the H i mass fraction (Catinella et al. 2013; Brown et al. 2015).
Together with the analysis of the primary driver of metallicity gradient, this might explain why log f HI correlates with metallicity gradients. Another approach might be provided by our simple calculations in Sect. 5, which show that the global H i mass fraction sets the local metallicity at specified radii (here r eff and r 90 ).
Once the global H i mass fraction determines the metallicity at, for example, r eff and r 90 , f HI also determines the rate at which the metallicity changes from r eff to r 90 and, thus, the metallicity gradients. In this way, our simple model could also explain why the metallicity gradient seems to correlate with H i mass fraction. Barrera-Ballesteros et al. (2018) did not look at the correlation between metallicity gradients or local metallicity and global H i content, but the authors did report that local metallicity depends on local cold gas mass fractions (estimates based on the optical extinction A V ). In particular, they found lower metallicities in regions where the ratio of local gas to local total mass is high. As we assume constant H i column density across an exponentially declining stellar disc, our model also suggests lower metallicities where the H i to stellar surface density ratio is higher. Thus, both our simple model and our data agree with the findings by Barrera-Ballesteros et al. (2018). We are furthermore able to specify that H i is more important than H 2 in defining the metallicity. In light of these results, it will be interesting to follow up on these investigations once resolved H i and metallicity observations are available for a large number of galaxies, in particular, through combinations of surveys such as MaNGA and Apertif (Adams et al., in prep.) or WALLABY (Koribalski et al. 2020).
Conclusion
In this work, we present new optical longslit spectra for 27 low-mass galaxies from the xGASS and xCOLD GASS (Saintonge et al. 2017) surveys. By combining the new data with data from xGASS and xCOLD GASS, we investigated the relation between gas-phase oxygen abundance, gas content, and star formation. In particular, we focused on metallicity gradients and the local metallicity at different galactocentric radii and their correlation with global galaxy properties. Our findings can be summarised as follows:
- While there are a number of global galaxy properties that correlate with the metallicity gradient, various statistical analyses suggest that only the stellar mass surface density µ drives metallicity gradients. Other correlations come about because log µ correlates with these global galaxy properties.
- The correlation between µ and the metallicity gradient can be interpreted with the help of the chemo-dynamical evolution models of Pezzulli & Fraternali (2016) and Boissier & Prantzos (2000): the observed correlation can be interpreted as a sign of (i) different spin parameters of the host halo or (ii) different accretion and radial flow patterns in galaxies, depending on their stellar mass surface density.
- The local metallicity is correlated with the global H i mass fraction. Although it is surprising that a local measurement should carry information about global galaxy properties, this correlation can actually be modelled with a simple gas regulator model, described by a local closed-box model plus an effective yield that accounts for small radial flows.
- When comparing to metallicity gradients in the literature, in particular MaNGA (Bundy et al. 2015; Belfiore et al. 2017b; Sanchez et al. 2018; Sánchez et al. 2016a,b) and SAMI (Croom et al. 2012; Poetrodjojo et al. 2018), we find that our results agree within the errors for high-mass galaxies. In the lower stellar mass regime we observe relatively steep gradients. These discrepancies cannot be explained by sample selection but potentially by small-sample statistics. We expect further discussion in the literature of trends in metallicity gradients for galaxies at stellar masses M ≤ 10^10 M_⊙ or with low stellar mass surface densities (small to no bulges). Furthermore, it is vital that metallicity gradients are measured from metallicities at similar radial regions. Once data points inwards of 0.5 r eff,r are included in the gradient measurement, which was not done by the MaNGA team, our results start to differ significantly. In particular, the (local) correlation between metallicity and H i has not yet been studied in great detail across galaxy discs.
Upcoming and ongoing surveys such as MaNGA, SAMI, WALLABY, and Apertif, as well as future surveys on MeerKAT, will provide more information and further details about local ISM enrichment and radial gas flows.
Figure caption: The metallicity (O3N2 method) is shown as a function of radius. On the main x-axis, the radius is normalised by r eff,r; additionally, we also give the x-axis in units of kpc, normalised by r 90,r, r 50,r, and R 25,r for orientation. Filled circles mark all metallicity measurements that meet our quality criteria (see Sect. 3.3) and open circles those measurements that do not. The red dashed line shows the linear fit used to measure the metallicity gradient from the full metallicity profile, and the orange dotted line the linear fit used to measure the gradient without the central 0.5 r eff,r. The yellow shaded area marks the radial region between 0.5 and 2.0 r eff,r, where e.g. publications based on CALIFA and MaNGA data products perform their metallicity gradient measurements. The text in the top right corner gives the GASS ID of the galaxy and the stellar mass.
EURASIP Journal on Applied Signal Processing 2002:9, 944–953. © 2002 Hindawi Publishing Corporation
Frequency Spectrum Based Low-Area Low-Power Parallel FIR Filter Design
Parallel (or block) FIR digital filters can be used either for high-speed or for low-power (with reduced supply voltage) applications. Traditional parallel filter implementations cause a linear increase in the hardware cost with respect to the block size. Recently, an efficient parallel FIR filter implementation technique requiring a less-than-linear increase in the hardware cost was proposed. This paper makes two contributions. First, the filter spectrum characteristics are exploited to select the best fast filter structures. Second, a novel block filter quantization algorithm is introduced. Using filter benchmarks, it is shown that the use of the appropriate fast FIR filter structures and the proposed quantization scheme can reduce the number of binary adders by up to 20%.
INTRODUCTION
Finite impulse response (FIR) filters are widely used in various DSP applications. In some applications, the FIR filter circuit must be able to operate at high sample rates, while in other applications, the FIR filter circuit must be a low-power circuit operating at moderate sample rates. The low-power or low-area techniques developed specifically for digital filters can be found in [1,2,3,4,5,6,7].
Parallel (or block) processing can be applied to digital FIR filters to either increase the effective throughput or reduce the power consumption of the original filter. While sequential FIR filter implementation has been given extensive consideration, very little work has been done that deals directly with reducing the hardware complexity or power consumption of parallel FIR filters.
Traditionally, the application of parallel processing to an FIR filter involves the replication of the hardware units that exist in the original filter. If the area required by the original circuit is A, then the L-parallel circuit requires an area of L × A. Recently, an efficient parallel FIR filter implementation technique requiring a less-than-linear increase in the hardware cost was proposed using fast FIR algorithms (FFAs) [8].
In [9], it was shown that the power consumption of arithmetic units can be reduced if statistical properties of the input signals are exploited. In this paper, based on [10], it is shown that the hardware cost can be reduced by exploiting the frequency spectrum characteristics of the given transfer function. This is achieved by selecting appropriate FFA structures out of many possible FFA structures, all of which have similar hardware complexity at the word level. However, their complexity can differ significantly at the bit level. For example, in narrowband low-pass filters, the signs of consecutive unit sample response values do not change much, and therefore their difference can require fewer bits than their sum. This favors the use of a parallel structure whose subfilters are built from differences of consecutive unit sample response values rather than sums, as illustrated in the sketch below.
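The following sketch makes that comparison concrete. The filter specification, the word length, and the use of SciPy's Remez design plus a non-adjacent-form bit counter are illustrative assumptions, not details taken from the paper; the point is only that the sum sub-filter H0 + H1 typically needs more signed-power-of-two terms than the difference H0 − H1 for a narrowband design.

```python
import numpy as np
from scipy.signal import remez

def spt_terms(n):
    """Nonzero digits in the minimal signed-power-of-two (non-adjacent form) code of n."""
    n = abs(int(n))
    count = 0
    while n:
        if n & 1:
            n -= 2 - (n & 3)   # digit is +1 if n % 4 == 1, -1 if n % 4 == 3
            count += 1
        n >>= 1
    return count

# Narrowband low-pass design (band edges chosen to mimic the narrowband case above)
h = remez(36, [0, 0.10, 0.15, 0.5], [1, 0], fs=1.0)
h0, h1 = h[0::2], h[1::2]                      # polyphase sub-filters
W = 10                                         # assumed coefficient word length
quantize = lambda c: np.round(np.asarray(c) * 2 ** (W - 1)).astype(int)

bits_sum = sum(spt_terms(v) for v in quantize(h0 + h1))
bits_diff = sum(spt_terms(v) for v in quantize(h0 - h1))
print(f"SPT terms: H0+H1 -> {bits_sum}, H0-H1 -> {bits_diff}")   # the difference is expected to be cheaper
```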
In addition to the appropriate selection of FFA structures, proper quantization of subfilters is important for low-power or low hardware cost implementation of parallel FIR filters. It is shown in [5,6,7] that if the filter coefficients are first scaled before the quantization process is performed, the resulting filter will have much better frequency-space characteristics. When the quantized filter is implemented, a postprocessing scale factor (PPSF) is used to properly adjust the magnitude of the filter output. In cases where large levels of parallelism are used, the number of required subfilters is large, and consequently the PPSFs can contribute to a significant amount of hardware overhead. In [8], PPSFs are restricted to a set of simple values to reduce the hardware overhead due to PPSFs. Since the original PPSF is replaced with the new simple PPSF that is the nearest in value, the quantized filter coefficients must also be properly modified. However, this approach is not guaranteed to give optimal quantized coefficients since already quantized coefficients are modified again. To avoid this problem, we propose look-ahead maximum absolute difference (LMAD) quantization algorithm, which gives optimal quantized coefficients for a given simple PPSF value.
In Section 2, FFAs are briefly reviewed. Also, frequency spectrum related hardware complexities for different types of FFAs are discussed. Section 3 presents a quantization method suitable for block FIR filters. Section 4 presents several block filter design examples.
FAST FIR ALGORITHMS
Consider the general formulation of a length-N FIR filter, y(n) = Σ_{i=0}^{N−1} h_i x(n − i), where {x_i} is an infinite-length input sequence and {h_i} are the length-N FIR filter coefficients. In the polyphase representation of a traditional L-parallel FIR filter [11], the transfer function and the input are decomposed as H(z) = Σ_{q=0}^{L−1} z^{−q} H_q(z^L) and X(z) = Σ_{p=0}^{L−1} z^{−p} X_p(z^L), so that each output phase Y_r is a sum of products of the subfilters H_q and the input phases X_p. This block FIR filtering equation shows that the parallel FIR filter can be realized using L² FIR subfilters of length N/L. This linear increase in complexity with the block size can be reduced using various FFA structures.
For L = 2, this implies the block filtering equations
Y_0 = H_0 X_0 + z^{−2} H_1 X_1,  Y_1 = H_0 X_1 + H_1 X_0.  (4)
Direct implementation of (4) is shown in Figure 1. This structure computes a block of 2 outputs using 4 length-N/2 FIR filters and 2 postprocessing additions, which requires 2N multipliers and 2N − 2 adders. If (4) is written in a different form, the (2×2) FFA0 (FFA-type 0) is obtained:
Y_0 = H_0 X_0 + z^{−2} H_1 X_1,  Y_1 = H_{0+1} X_{0+1} − H_0 X_0 − H_1 X_1,  (5)
where H_{i+j} = H_i + H_j and X_{i+j} = X_i + X_j. Implementation of (5) is shown in Figure 2. This structure computes a block of 2 outputs using 3 length-N/2 FIR filters and 4 preprocessing and postprocessing additions, which requires 3N/2 multipliers and 3(N/2 − 1) + 4 adders. By a simple modification of (5), the following FFA1 (FFA-type 1) is derived [11]:
Y_0 = H_0 X_0 + z^{−2} H_1 X_1,  Y_1 = H_0 X_0 + H_1 X_1 − H_{0−1} X_{0−1},  (6)
where, in (6), H_{0−1} = H_0 − H_1 and X_{0−1} = X_0 − X_1. The structure derived by FFA1 is shown in Figure 3. The structures derived by FFA0 and FFA1 are essentially the same except for some sign changes. Notice that, in FFA1, H_{0−1} is used instead of H_{0+1}. When an FIR filter is implemented using a multiplierless approach, the hardware complexity is directly proportional to the number of nonzero bits in the filter coefficients. If the signs of the given impulse response sequences do not change frequently, as in the narrowband low-pass filter case, the coefficient magnitudes of H_0 + H_1 are likely to be larger than those of H_0 − H_1. Then, H_0 + H_1 has more nonzero bits in its coefficients than H_0 − H_1. (See the examples in Section 4.) If the signs of the given impulse response sequences change frequently, as in the wide-band low-pass filter case, H_0 − H_1 is likely to have more nonzero bits than H_0 + H_1. Thus, to achieve minimum hardware cost, it is necessary to select either FFA0 or FFA1 depending upon the frequency spectrum specifications.
Figure 1: Traditional 2-parallel FIR filter.
Figure 2: 2-parallel FIR filter using FFA0.
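As a quick sanity check on the FFA0 identity in (5), the sketch below filters a random signal both directly and via the three FFA0 sub-filters and confirms the outputs agree. The random test data and the block bookkeeping are illustrative choices, not part of the original design examples.

```python
import numpy as np

rng = np.random.default_rng(0)
h = rng.standard_normal(16)              # length-N FIR filter (N even), illustrative
x = rng.standard_normal(64)              # even-length input signal
h0, h1 = h[0::2], h[1::2]                # polyphase sub-filters
x0, x1 = x[0::2], x[1::2]                # polyphase input components

p00 = np.convolve(h0, x0)                # the three FFA0 sub-filter outputs
p11 = np.convolve(h1, x1)
psum = np.convolve(h0 + h1, x0 + x1)

n_out = len(h) + len(x) - 1
L = n_out // 2 + 1                       # number of even-indexed output samples
pad = lambda a: np.pad(a, (0, L - len(a)))

y_even = pad(p00) + pad(np.concatenate(([0.0], p11)))   # H0 X0 + z^{-2} H1 X1
y_odd = pad(psum - p00 - p11)            # (H0+H1)(X0+X1) - H0 X0 - H1 X1

y_ffa = np.zeros(2 * L)
y_ffa[0::2], y_ffa[1::2] = y_even, y_odd
print(np.allclose(y_ffa[:n_out], np.convolve(h, x)))    # expected: True
```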
Cascading FFAs
The (2 × 2) and (3 × 3) FFAs can be cascaded together to achieve higher levels of parallelism. The cascading of FFAs is a straightforward extension of the original FFA application [8]. For example, an (m × m) FFA can be cascaded with an (n×n) FFA to produce an (m×n)-parallel filtering structure. The set of FIR filters that result from the application of the (m × m) FFA are further decomposed, one at a time, by the application of the (n × n) FFA. The resulting set of filters will be of length N/(m × n).
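To see how the hardware saving compounds when FFAs are cascaded, the sketch below counts multipliers for a few block sizes. It assumes 3 sub-filters per (2 × 2) FFA, as in (5), and 6 sub-filters per (3 × 3) FFA (the latter count is taken from the standard FFA literature rather than from the text), and compares against plain L-fold hardware replication.

```python
def ffa_multipliers(n_taps, factors):
    """Multipliers for an FFA cascade over block size L = product of the factors."""
    subfilters, L = 1, 1
    for f in factors:
        subfilters *= {2: 3, 3: 6}[f]     # sub-filters per (2x2) / (3x3) FFA
        L *= f
    return subfilters * (n_taps // L), L

for factors in ([2], [2, 2], [2, 3]):
    mults, L = ffa_multipliers(72, factors)
    print(f"L={L}: FFA cascade uses {mults} multipliers vs {L * 72} for direct replication")
```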
For example, the (4 × 4) FFA can be obtained by first applying the (2 × 2) FFA0 to (2) and then applying the (2 × 2) FFA0 or the (2 × 2) FFA1 to each of the filtering operations that result from the first application of the FFA0. The resulting (4 × 4) FFA structure is shown in Figure 7. Each filter block F_0, F_0 + F_1, and F_1 represents a (2 × 2) FFA structure and can be replaced separately by either the (2 × 2) FFA0 or the (2 × 2) FFA1. Each filter block F_0, F_0 + F_1, and F_1 is composed of three subfilters. When the filter block F_0 + F_1 is implemented using the FFA1 structure, its subfilters are H_{0+1}, H_{2+3}, and H_{0+1} − H_{2+3}. Thus, even though the FFA1 structure is used for slowly varying impulse response sequences, optimum performance is not guaranteed. In this case, better performance can be obtained by using the modified FFA1 structure shown in Figure 8. Since its subfilters are H_{0−1}, H_{2−3}, and H_{0−1} − H_{2−3}, this structure gives a smaller number of nonzero bits for slowly varying impulse response sequences. Notice that the structure in Figure 8 can be derived by first applying the (2 × 2) FFA1 (instead of the (2 × 2) FFA0) to (2). When the filter block F_0 + F_1 in Figure 7 is replaced by the structure in Figure 8, it can be shown that the outputs are y(4k), −y(4k + 1), y(4k + 2), and −y(4k + 3).
Selection of FFA types
For given length-N unit sample response values {h_i} and block size L, the selection of the best FFA type can be roughly determined by comparing the signs of the values in the subfilters. For example, in the case of L = 2 and even N, H_0 and H_1 consist of the even- and odd-indexed coefficients, respectively, so the signs of the coefficient pairs (h_{2k}, h_{2k+1}) are compared, as in the sketch below.
Figure 4: 3-parallel FIR filter using FFA0.
Figure 5: 3-parallel FIR filter using FFA1.
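A minimal version of that sign-comparison rule for L = 2 is sketched below. The narrowband specification mirrors the one used in the design examples of Section 4, while the wide-band specification and the tie-breaking choice are illustrative assumptions.

```python
from scipy.signal import remez

def suggest_ffa_type(h):
    """More same-sign pairs (h[2k], h[2k+1]) favours the difference-based FFA1, else FFA0."""
    pairs = list(zip(h[0::2], h[1::2]))
    same = sum(1 for a, b in pairs if a * b > 0)
    opposite = sum(1 for a, b in pairs if a * b < 0)
    return ("FFA1" if same >= opposite else "FFA0"), same, opposite

narrow = remez(36, [0, 0.10, 0.15, 0.5], [1, 0], fs=1.0)   # slowly varying signs
wide = remez(36, [0, 0.35, 0.40, 0.5], [1, 0], fs=1.0)     # rapidly varying signs
print(suggest_ffa_type(narrow))   # expected to favour FFA1
print(suggest_ffa_type(wide))     # expected to favour FFA0
```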
LOOK-AHEAD MAD QUANTIZATION
It is shown in [5,6,7] that if the filter coefficients are first scaled before the quantization process is performed, the resulting filter will have much better frequency-space characteristics. The NUS algorithm [6] employs a scalable quantization process. To begin the process, the ideal filter is normalized so that the largest coefficient has an absolute value of 1. The normalized ideal filter is then multiplied by a variable scale factor (VSF). The VSF steps through the range of numbers from 0.4375 to 1.13 with a step size of 2^−W, where W is the coefficient word length. Signed power-of-two (SPT) terms are then allocated to the quantized filter coefficient that represents the largest absolute difference between the scaled ideal filter and the quantized filter. The NUS algorithm iteratively allocates SPT terms until the desired number of SPT terms is allocated or until the desired normalized peak ripple (NPR) specification is met. Once the allocation of terms stops, the NPR is calculated. The process is then repeated for a new scale factor. The quantized filter leading to the minimum NPR is chosen.
In parallel FIR filters, the NPR cannot be used as a selection criterion for choosing the best quantized filter, since passband/stopband ripples cannot be defined for the set of subfilters obtained by the application of FFAs. In [8], it is shown that the maximum absolute difference (MAD) between the frequency responses of the ideal filter and the quantized filter can be used as an efficient selection criterion for parallel filters.
Figure 6: 3-parallel FIR filter using FFA2.
When the quantized filter is implemented, a postprocessing scale factor (PPSF) is used to properly adjust the magnitude of the filter output. The PPSF is calculated as in (13). In cases where large levels of parallelism are used, the PPSFs can contribute a significant amount of hardware overhead. In [8], to reduce this overhead, the PPSFs are restricted to a set of simple values, and the original PPSF is replaced with the simple PPSF that is nearest to it in value. This is accomplished using the following three steps: (i) determine the effective coefficients, effective coeffs. = quantized coeffs. × PPSF; (ii) determine the shifted coefficients with the new PPSF, shifted coeffs. = effective coeffs. / new PPSF; (iii) quantize the shifted coefficients. However, the above steps are not guaranteed to give optimal quantized coefficients for the new PPSF value. The reason is that the quantization in (iii) is performed on already quantized coefficients.
To avoid this problem, the look-ahead MAD (LMAD) quantization algorithm is proposed. In the proposed algorithm, the PPSF for a given VSF is computed by (13) before the quantization step begins. If the number of nonzero bits in the computed PPSF is less than a prespecified value, then the normalized coefficients are scaled by the VSF and the scaled coefficients are quantized. Otherwise, the procedure is repeated for the next VSF value.
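The loop structure of that procedure is sketched below. It is only an outline under several assumptions of ours: a plain rounding quantizer stands in for the SPT allocation of the NUS algorithm, the expression used for the PPSF is a guess at Equation (13) (it simply recovers the original coefficient magnitude from the normalization and the VSF), and the MAD is evaluated on a fixed 512-point frequency grid.

```python
import numpy as np
from scipy.signal import freqz

def nonzero_spt_terms(value, wordlength=8):
    """Nonzero digits in the minimal signed-power-of-two code of `value` at the given word length."""
    n = abs(int(round(value * 2 ** (wordlength - 1))))
    count = 0
    while n:
        if n & 1:
            n -= 2 - (n & 3)      # non-adjacent-form digit (+1 or -1)
            count += 1
        n >>= 1
    return count

def lmad_quantize(h, wordlength=8, max_ppsf_terms=2):
    """Sketch of the look-ahead MAD quantization loop described above."""
    h = np.asarray(h, dtype=float)
    h_norm = h / np.max(np.abs(h))                 # largest coefficient -> 1
    _, resp_ideal = freqz(h, worN=512)
    best = None
    for vsf in np.arange(0.4375, 1.13, 2.0 ** -wordlength):
        ppsf = np.max(np.abs(h)) / vsf             # assumed stand-in for Eq. (13)
        if nonzero_spt_terms(ppsf, wordlength) > max_ppsf_terms:
            continue                               # look-ahead: skip this VSF
        scaled = h_norm * vsf
        q = np.round(scaled * 2 ** (wordlength - 1)) / 2 ** (wordlength - 1)
        _, resp_quant = freqz(q * ppsf, worN=512)
        mad = np.max(np.abs(resp_ideal - resp_quant))
        if best is None or mad < best[0]:
            best = (mad, q, ppsf)
    return best
```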
In [8], the number of nonzero bits in the PPSF is fixed. However, in the proposed approach, the number of nonzero bits in the PPSF can be varied and the PPSF value giving the best performance can be selected. From our simulation experience, increasing the number of nonzero bits in the PPSF beyond three does not improve the numerical performance significantly. Notice that the MAD value obtained by the proposed method is only 45% of the MAD value in [8]. Frequency responses are compared in Figure 9. Table 1 shows that, for the two low-pass FIR filter examples in [8], the proposed method can save up to 24% of the adders. In [8], only FFA type 0 is used for each value of L. However, as can be seen from Table 1, better results are obtained by selecting the FFA type(s) properly for each L.
To compare the hardware savings due to the quantization and due to the proper selection of FFA types, only the H_{0+1} or H_{0−1} subfilters are considered. From Table 2, the number of nonzero bits for H_{0+1} of the non-scaled FFA0 filter is 14, while the number of nonzero bits for H_{0+1} of the scaled FFA0 filter is 10 (including the PPSF). Thus, in addition to the word-length reduction, a hardware saving of about 28% is obtained by LMAD scaling.
From Table 4, the number of nonzero bits for H_{0−1} of the scaled FFA1 filter is 7 (including the PPSF). Thus, a further saving of 22% is obtained by selecting the proper filter type. In this example, about half of the saving is due to the LMAD quantization and the other half is due to proper filter type selection.
DESIGN EXAMPLES
In this section, three design examples with various frequency specifications are given.
Example 3. Consider a narrowband low-pass filter with filter order = 35, passband edge = 0.2π, maximum passband ripple = 0.185 dB, stopband edge = 0.3π, and minimum stopband attenuation = 33.5 dB. As can be seen from Figure 11, the signs of the impulse response sequences (designed by the Remez exchange algorithm) change slowly.
For L = 2, according to the discussion in Section 2.4, the number of pairs with the same signs is 16, while the number of pairs with opposite signs is only 2. Thus, FFA1 is more efficient than FFA0. By the LMAD quantization algorithm, the number of nonzero bits required for H_{0+1} is 42, but the number of nonzero bits required for H_{0−1} is 24. Thus the hardware cost of H_{0−1} is about 57% of the hardware cost of H_{0+1}. The frequency responses for L = 2 are compared in Figure 12. For L = 4, the number of pairs with opposite signs in the subfilter pair {H_0, H_2} exceeds the number of pairs with the same signs, which is 2. Thus, FFA0 is the most efficient for F_0. The number of pairs with opposite signs in the subfilter pair {H_1, H_3} is 7, while the number of pairs with the same signs is 2. Thus, FFA0 is the most efficient for F_1. By a similar procedure, it can be shown that FFA1 is the most efficient choice for F_0 + F_1.
The design results for L = 2, 3, and 4 are summarized in Table 5. For L = 2 and L = 3, about 20% of the hardware can be saved by a proper choice of FFA types. However, for L = 4, only a 7% hardware saving can be achieved by a proper choice of FFA types. The main reason is that the correlation of filter coefficients between subfilters is reduced as the block size increases. The corresponding results for the other design example are summarized in Table 6. For L = 2 and L = 3, about 12%–15% of the hardware can be saved by a proper choice of FFA types. For L = 4, a 4% hardware saving can be achieved by a proper choice of FFA types.
CONCLUSIONS
It has been shown that the hardware cost and power consumption of parallel FIR filters can be reduced significantly by exploiting the frequency spectrum characteristics. For example, in narrowband low-pass filters, the signs of consecutive unit sample response values do not change much, and therefore their difference (FFA1) can require fewer bits than their sum (FFA0). In wideband low-pass filters, the signs of consecutive unit sample response values change frequently, and therefore their sum (FFA0) can require fewer bits than their difference (FFA1). To determine the best FFA type for a given impulse response sequence and block size L, a sign-comparing procedure was proposed. The usefulness of the proposed sign-comparing procedure was demonstrated by several examples. Also, the proposed look-ahead MAD quantization algorithm was shown to be very efficient for the implementation of parallel FIR filters. Substructure sharing is the process of examining the hardware implementation of the filter coefficients and sharing the hardware units that are common among the filter coefficients. Using the substructure sharing techniques in [8], further savings in hardware cost and power consumption can be achieved.
Developing a similar approach for power reduction of adaptive FIR filters will be an interesting topic for future research. Further research needs to be directed towards finite word-length analysis of these low-power parallel FIR filters.
Fermi Large Area Telescope Detection of Gamma Rays from the NGC 6251 Radio Lobe
We report the detection of extended γ-ray emission from lobes in the radio galaxy NGC 6251 using observation data from the Fermi Large Area Telescope. The maximum likelihood analysis results show that a radio morphology template provides a better fit than a pointlike source description for the observational data at a confidence level of 8.1σ, and the contribution of lobes constitutes more than 50% of the total γ-ray flux. Furthermore, the γ-ray energy spectra show a significant disparity in shape between the core and lobe regions, with a curved log-parabola shape observed in the core region and a power-law form observed in the lobes. Neither the core region nor the northwest (NW) lobe displays significant flux variations in the long-term γ-ray light curves. The broadband spectral energy distributions of the core region and the NW lobe can be explained with a single-zone leptonic model. The γ-rays of the core region are due to the synchrotron-self-Compton process, while the γ-rays from the NW lobe are interpreted as inverse Compton emission of the cosmic microwave background.
Introduction
Radio galaxies (RGs) are a subclass of radio-loud (RL) active galactic nuclei (AGNs), characterized by the remarkable and complex large-scale jet morphology in the radio band. The large-scale jet can extend beyond the host galaxy of RGs and form substructures, such as knots, hotspots, and radio lobes, which have been resolved in the radio, optical, and X-ray bands (see Harris & Krawczynski 2006 for a review). So far, dozens of RGs have been detected by the Fermi Large Area Telescope (Fermi-LAT; Abdollahi et al. 2022; Ballet et al. 2023). Several RGs have even been detected in the TeV band and confirmed to be very high-energy emitters. Generally, the γ-rays of RGs are still considered to come from the core jet (Fukazawa et al. 2015; Xue et al. 2017). However, some works studying the X-ray emission of the large-scale jet substructures in RGs predicted detectable γ-rays from some substructures (e.g., Zhang et al. 2009, 2010, 2018; He et al. 2023). In particular, the detection of γ-rays from the large-scale radio lobes of the RGs Cen A (Abdo et al. 2010a; Sun et al. 2016) and Fornax A (Ackermann et al. 2016) confirms that the substructures of the jet are acceleration sites of high-energy particles. Hence, the origin of the γ-ray emission of RGs is still debated.
NGC 6251 is a nearby (z = 0.0247, Wegner et al. 2003) giant RG with a radio angular scale extending to about 1°.2. Its radio morphology mainly consists of a radio core, a narrow straight jet with a roughly uniform cone angle, and two large-scale radio lobes (Waggett et al. 1977; Saunders et al. 1981). The jet starts within 1 pc of the nucleus, extends toward the northwest (NW) for approximately 4′.5, and subsequently forms the NW lobe about 0°.3 away from the radio core (Saunders et al. 1981; Perley et al. 1984). The radio intensities of the jet and NW lobe are clearly higher than that of the southeast (SE) lobe, which is separated from the radio core by an angular distance of 0°.7 (Perley et al. 1984; Cantwell et al. 2020). The X-rays from the nucleus and its ∼4′.5 jet were initially observed by ROSAT (Birkinshaw & Worrall 1993). The diffuse X-ray emission originating from the NW lobe in NGC 6251 has been detected through Suzaku X-ray imaging observations and has been suggested to be generated by the inverse Compton scattering of cosmic microwave background photons (IC/CMB; Takeuchi et al. 2012). The X-ray emission from the straight jet region was also reported to potentially arise from the IC/CMB process (Sambruna et al. 2004), although an alternative explanation could be synchrotron radiation (Evans et al. 2005). Gamma-rays from NGC 6251 were likely first detected by the Compton Gamma-Ray Observatory in the MeV–GeV range. Using the X-ray images obtained from ROSAT and ASCA observations, Mukherjee et al. (2002) proposed that the Energetic Gamma-Ray Experiment Telescope source 3EG J1621+8203 is likely associated with NGC 6251. This association was further substantiated by Foschini et al. (2005) through an analysis of a wide field-of-view observation from INTEGRAL/IBIS. The GeV γ-ray emission of NGC 6251 was reported in the first Fermi-LAT catalog (Abdo et al. 2010b), identified as coming from a γ-ray-emitting misaligned AGN (Abdo et al. 2010c), and subsequently included in the second/third/fourth Fermi-LAT catalogs (Nolan et al. 2012; Acero et al. 2015; Abdollahi et al. 2022; Ballet et al. 2023). The observed γ-rays are conventionally believed to originate from the core-jet region near the black hole via the synchrotron-self-Compton (SSC) process (Chiaberge et al. 2003; Migliori et al. 2011; Xue et al. 2017). However, the 95% error ellipse of its associated Fermi-LAT source 2FGL J1629.4+8236 does not encompass the radio core of NGC 6251 (Takeuchi et al. 2012). Additionally, the corresponding coordinates of this
Fermi-LAT source in the latest fourth Fermi-LAT source catalog fall within the region of the jet in NGC 6251, at a distance of ∼3′.6 from the radio core (Foschini et al. 2022). These results make the origin of its γ-rays ambiguous. In this paper, we comprehensively analyze the ∼15 yr Fermi-LAT observation data to investigate the origin of its γ-ray emission. The data analysis and results of Fermi-LAT are presented in Section 2. The spectral energy distributions (SEDs) of the NW lobe and core region are constructed and modeled in Section 3. A discussion and conclusions are given in Section 4. Throughout this paper, H_0 = 71 km s^−1 Mpc^−1, Ω_m = 0.27, and Ω_Λ = 0.73 are adopted.
Baseline Model of Data Analysis
NGC 6251 has been reported to be associated with the γ-ray source 4FGL J1630.6+8234 in the Fermi-LAT 12 yr Source Catalog (4FGL-DR3, Abdollahi et al. 2022). The Pass 8 data covering 2008 August 4 to 2023 March 1 (MJD 54682–60004) within the energy range of 0.1–300 GeV were downloaded from the Fermi Science Support Center. We select data within the 15° region of interest (ROI) centered on the radio position of NGC 6251 (R.A. = 248°.133, decl. = 82°.538). The publicly available software fermitools (v2.2.0, Fermi Science Support Development Team 2019) and the binned likelihood analysis method are used for our analysis. We use event class "SOURCE" (evclass=128) and event type "FRONT+BACK" (evtype=3) for the data analysis, based on the LAT data selection recommendations. To eliminate the contamination of γ-rays from the Earth's limb, a maximum zenith angle of 90° is set. A standard filter expression "(DATA_QUAL>0)&&(LAT_CONFIG==1)" and the instrument response function P8R3_SOURCE_V3 are used. All sources included in 4FGL-DR3 within the ROI are added to the model. The spectral parameters of the sources encompassed in the circle centered on 4FGL J1630.6+8234 with a radius of 6° are left free, whereas the parameters of those sources lying beyond 6° are fixed to their 4FGL-DR3 values. The background models, including the isotropic emission ("iso_P8R3_SOURCE_V3_v1.txt") and the diffuse Galactic interstellar emission ("gll_iem_v07.fits"), are considered, and only their normalization parameters are kept free.
We use the maximum test statistic (TS) to evaluate the significance of the γ-ray signal of a source above the background, TS = 2(ln L_src − ln L_null), where L_src and L_null are the likelihood values for the background with and without the source. If TS ≥ 25, it is considered that there is a new source (Abdo et al. 2010b). By generating a 5° × 5° residual TS map centered on the radio core of NGC 6251 to search for new sources, a maximum TS value of ∼10 is obtained, indicating that no new sources appear in the background. Generally, the power-law (PL) or log-parabola (LP) function is used to fit the spectrum of a source, i.e., N(E) = N_0 (E/E_b)^(−Γγ) or N(E) = N_0 (E/E_b)^(−Γγ − β log(E/E_b)), where N(E) is the photon distribution as a function of energy, Γγ is the photon spectral index, E_b is the scale parameter of the photon energy, and β is the curvature parameter (Massaro et al. 2004). Following the methodology outlined in the second Fermi-LAT catalog (Nolan et al. 2012), we use TS_curve = 2(ln L_LP − ln L_PL) to estimate the curvature significance of an energy spectrum. If TS_curve ≥ 16 (corresponding to 4σ), it indicates that the spectrum exhibits significant curvature, and therefore we use the LP function to describe it.
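For reference, the two spectral forms and the curvature test can be written compactly as below. The natural-log convention in the log-parabola exponent and the TS-to-sigma shortcut are assumptions of this sketch rather than statements about the exact convention used by the Fermi analysis tools.

```python
import numpy as np

def power_law(E, N0, gamma, Eb=1.0):
    """PL photon spectrum dN/dE."""
    return N0 * (E / Eb) ** (-gamma)

def log_parabola(E, N0, gamma, beta, Eb=1.0):
    """LP photon spectrum dN/dE (natural-log curvature term assumed here)."""
    return N0 * (E / Eb) ** (-(gamma + beta * np.log(E / Eb)))

def ts_curve(lnL_lp, lnL_pl):
    """TS_curve = 2 (ln L_LP - ln L_PL); values >= 16 flag significant curvature."""
    return 2.0 * (lnL_lp - lnL_pl)

# With one extra free parameter, TS_curve behaves roughly as chi^2 with 1 d.o.f.,
# so the significance is about sqrt(TS_curve): the threshold of 16 maps to ~4 sigma.
print(np.sqrt(16.0))
```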
Pointlike Source Analysis
In 4FGL-DR4 (Abdollahi et al. 2022; Ballet et al. 2023), 4FGL J1630.6+8234 is identified as a pointlike source (PS) with an LP spectrum. Reanalyzing its ∼15 yr Fermi-LAT observation data, the TS map also shows it as a PS, with TS ∼ 2619.34. Its time-integrated spectrum over the ∼15 yr of Fermi-LAT observations within the 0.1–300 GeV energy band is well explained by an LP spectral form with Γγ = 2.36 ± 0.03 and β = 0.04 ± 0.02, as depicted in Figure 1 and shown in Table 1. The ∼15 yr average flux is F_0.1–300 GeV = (13.40 ± 0.51) × 10^−12 erg cm^−2 s^−1. These results are roughly consistent with those of Γγ = 2.27 ± 0.04, β = 0.10 ± 0.02, and F_0.1–100 GeV = (13.10 ± 0.56) × 10^−12 erg cm^−2 s^−1 in 4FGL-DR4. The long-term light curve of 4FGL J1630.6+8234 is generated using the adaptive-binning method (Lott et al. 2012) to ensure TS ≥ 9 for each time bin, with a minimum bin size of 30 days. The ∼15 yr Fermi-LAT light curve of 4FGL J1630.6+8234 in the 0.1–300 GeV band demonstrates a steady state without any significant flux variation, as displayed in Figure 1.
Using the gtfindsrc tool, we reestimate the best-fit position of 4FGL J1630.6+8234 and obtain (R.A. = 247°.752, decl. = 82°.558) with a 95% confidence error circle radius of 0°.015. This position is located in the NW region relative to the radio core of NGC 6251, as presented in Figure 2. This, together with the steady γ-ray emission of 4FGL J1630.6+8234 over the past ∼15 yr, indicates that the γ-ray emission of 4FGL J1630.6+8234, or at least a portion of it, is likely dominated by the radiation of the large-scale extended regions in NGC 6251.
Radio Morphology Template
In order to examine our speculation, we first generate the γ-ray counts map of 4FGL J1630.6+8234 in the 0.1–300 GeV band, as depicted in Figure 2. The data reduction procedure is described in Appendix A.1. The radio contours of NGC 6251 at 609 MHz observed by the Westerbork Synthesis Radio Telescope (WSRT; Mack et al. 1997), the 95% error ellipses for the counterpart of NGC 6251 from 1FGL to 4FGL, and the 95% error circle of the best-fit position from this work are also presented in Figure 2. The confidence intervals of the best-fit position decrease with increasing LAT observation time, but the radio core of NGC 6251 is never encompassed. The distribution of γ-ray photons on the NW lobe of NGC 6251 is prominent, while the brightness of the SE lobe appears relatively subdued, rendering it indistinguishable in the map. As shown in Figure 5 in Appendix B, the spatial distribution of γ-ray photons varies significantly from the low-energy to the high-energy bands, shifting from a diffuse distribution dominated by the large-scale extended regions to a PS dominated by the radio core. The γ-ray photons above 10 GeV are primarily concentrated in close proximity to the radio core region.
Assuming that the γ-rays are produced through the IC process by the same population of electrons responsible for generating the radio emission, the spatial distribution of the γ-rays can be assumed to follow that of the radio emission (Ackermann et al. 2016). Therefore, using a Radio Morphology Template (RMT) model, we reanalyze the ∼15 yr Fermi-LAT observation data of 4FGL J1630.6+8234. We employ the RMT to partition the observation area into three zones, namely the core region and the two giant lobes. The regions of the two giant lobes are selected with the 609 MHz radio image from the NASA/IPAC Extragalactic Database (2019). Of particular note, the overlap region between the radio (green) contours and the red circle (Figure 5 in Appendix B) is subtracted and not included in the NW lobe template of the RMT model. The red circle is centered on the radio core with a radius of 4′.5, which encompasses the radio core, the bright straight jet extending to the NW from the radio core, and the faint counterjet. The two lobes are taken as extended radiation sources, while the core region is set as a PS model centered on the radio core of NGC 6251. We use the RMT model instead of the PS 4FGL J1630.6+8234 in the XML file and reperform the maximum likelihood analysis.
Through the RMT model analysis, we obtain the spectra and long-term light curves of the lobes and the core region, as shown in Figure 3. The spectrum of the core region still requires an LP function, with Γγ = 1.80 ± 0.13 and β = 0.28 ± 0.06, which is more curved and harder than the total integrated spectrum of 4FGL J1630.6+8234 (Figure 1). The ∼15 yr average flux of the core region in the 0.1–300 GeV band is F_0.1−300 GeV = (4.86 ± 0.61) × 10^−12 erg cm^−2 s^−1, with TS ∼ 508.55. The spectra of the two lobes can be well explained with a PL form. We obtain Γγ = 2.54 ± 0.06 and an average flux of F_0.1−300 GeV = (7.67 ± 0.76) × 10^−12 erg cm^−2 s^−1, with TS ∼ 600.31, for the NW lobe. The SE lobe displays a very steep spectrum in the Fermi-LAT energy band, with Γγ = 7.01 ± 0.19. Its average flux, F_0.1−300 GeV = (0.96 ± 0.23) × 10^−12 erg cm^−2 s^−1, and TS value of 30.96 are far lower than those of the core region and the NW lobe. The results are shown in Table 1. Almost no emission above 1 GeV is detected in the SE lobe, which is consistent with the γ-ray counts maps shown in Figure 5 in Appendix B. The long-term light curves of the core region and the NW lobe under the RMT model are produced simultaneously by fixing the parameters of the SE lobe and the other sources. The adaptive-binning method is also used, and a minimum time bin of 60 days is taken. Neither the NW lobe nor the core region shows significant γ-ray flux variation, as illustrated in Figure 3 (see Appendix A.2).
PS Model versus RMT Model
Clearly, the TS value and flux of the giant NW lobe are higher than those of the core region. The γ-rays from the lobes account for more than 50% of the total γ-ray flux of the source. We compare the likelihood values of the PS model with those of the RMT model and assess the significance with TS_ext = 2(ln L_RMT − ln L_PS), where the RMT model has four additional free parameters compared to the PS model.
We obtain TS_ext = 78.28, indicating that the RMT model is statistically favored over the PS model at a confidence level of 8.1σ.
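A quick back-of-the-envelope check, under the assumption that Wilks' theorem applies with the four extra free parameters of the RMT model and a two-sided Gaussian conversion, reproduces the quoted significance:

```python
from scipy.stats import chi2, norm

ts_ext, extra_dof = 78.28, 4
p_value = chi2.sf(ts_ext, df=extra_dof)   # chance probability of the likelihood improvement
print(norm.isf(p_value / 2.0))            # ~8.1 (Gaussian sigma)
```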
As described above, the absence of significant variability in the overall long-term light curve of 4FGL J1630.6+8234 (Figure 1) may be attributed to the dominance of the lobe emission; however, no discernible flux variation is observed in the light curve of the core region either (Figure 3). In order to study this issue further, we derive the long-term light curve of NGC 6251 in the 10–300 GeV band, since the γ-rays above 10 GeV are potentially dominated by the emission of the core region. Nevertheless, only three detection points can be obtained even when using a time bin of 720 days, and they do not show obvious flux variation, as illustrated in Figure 6 in Appendix B.
Constructing and Modeling the SED
The distinct spectral patterns observed in the core region and the NW lobe also suggest that the γ-rays from 4FGL J1630.6+8234 have separate origins. We construct the broadband SEDs for the NW lobe and the core region, spanning from the radio to the γ-ray band, as depicted in Figure 4. For the core region, the radio and optical–UV data are collected from Migliori et al. (2011), the X-ray data are the combined spectrum from the latest 11 X-ray Telescope (XRT) observations (see Appendix B), and the γ-ray data are obtained from the analysis in this work. For the NW lobe, the radio and X-ray data are collected from Takeuchi et al. (2012) and Persic & Rephaeli (2019), while the γ-ray data are obtained from the analysis in this work.
We adopt a single-zone leptonic radiation model to fit the broadband SEDs of the NW lobe and the core region. The model includes the synchrotron and SSC (and IC/CMB) processes of the relativistic electrons in the radiation regions. The emission region is assumed to be a sphere with radius R, magnetic field strength B, bulk Lorentz factor Γ, and Doppler boosting factor δ. The electron distribution is assumed to be a broken PL function, characterized by an electron density parameter N_0, a break energy γ_b, and indices p_1 and p_2, over the range [γ_min, γ_max]. Synchrotron self-absorption, the Klein–Nishina effect, and extragalactic background light (EBL) absorption (Franceschini et al. 2008) are also taken into account in the SED modeling.
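For concreteness, the assumed electron spectrum can be written as a broken power law, as in the sketch below; the normalization convention (density pinned at γ_b) and the sample parameter values are illustrative choices, with only γ_min = 200 for the core region taken from the text.

```python
import numpy as np

def broken_power_law(gamma, N0, gamma_b, p1, p2, gamma_min, gamma_max):
    """Electron density N(gamma): slope -p1 below the break gamma_b and -p2 above it."""
    gamma = np.asarray(gamma, dtype=float)
    n = np.where(gamma < gamma_b,
                 N0 * (gamma / gamma_b) ** (-p1),
                 N0 * (gamma / gamma_b) ** (-p2))
    n[(gamma < gamma_min) | (gamma > gamma_max)] = 0.0
    return n

# Illustrative evaluation on a logarithmic grid of electron Lorentz factors
gam = np.logspace(2.3, 5.5, 200)
n_e = broken_power_law(gam, N0=1.0, gamma_b=1e4, p1=2.5, p2=3.5,
                       gamma_min=200, gamma_max=2e5)
```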
Core Region
We consider only the synchrotron and SSC processes to reproduce the observed SED of the core region, the same model as in previous works on NGC 6251 (Chiaberge et al. 2003; Migliori et al. 2011; Xue et al. 2017). The size of the radiation region is determined as R = δcΔt/(1 + z), where Δt is set to 10 days and c is the speed of light. The values of p_1 and p_2 are obtained from the spectral indices in the radio/X-ray and γ-ray bands, respectively. γ_min cannot be constrained and is set to 200, while γ_max is roughly constrained by the last data point in the γ-ray band. We adjust the parameters B, δ, γ_b, and N_0 to reproduce the observed SED of the core region. It should be noted that these parameter values may not be unique; in particular, B and δ are degenerate (Zhang et al. 2012). The fitting parameters are given in Table 2.
The model can reproduce the observed SED of the core region well, as illustrated in Figure 4. We obtain δ = 3.4, which is lower than the typical values observed in blazars, consistent with the unified model of RL AGNs (Urry & Padovani 1995). The same model has previously been employed to explain the archival SEDs of the core region of NGC 6251, where B = 0.03 G and δ = 3.2 were reported (Chiaberge et al. 2003), as well as B = 0.037 G and δ = 2.4 (Migliori et al. 2011). Our findings of B = 0.04 G and δ = 3.4 are roughly consistent with theirs.
NW Lobe
We employ the synchrotron + SSC + IC/CMB processes to fit the broadband SED of the NW lobe. Considering that there is no relativistic motion for the giant lobe, δ = Γ = 1 is adopted. The CMB energy density in the comoving frame is computed following Dermer & Schlickeiser (1994) and Georganopoulos et al. (2006), with U_CMB = 4.2 × 10^−13 erg cm^−3. The radius of the radiation region is taken as R = 185 kpc (Takeuchi et al. 2012). The values of p_1 and p_2 are determined from the spectral indices in the radio and γ-ray bands, respectively. γ_min cannot be constrained and is set to 100, while γ_max is roughly constrained by the last data point in the γ-ray band. We adjust the values of the parameters B, γ_b, and N_0 to reproduce the observed data. The fitting parameters are provided in Table 2.
As depicted in Figure 4, the emission from the X-ray to the γ-ray band is dominantly produced by the IC/CMB process. In comparison to the IC/CMB component, the contribution from SSC is negligible. A previous study used an IC/(CMB+EBL) model, employing a double broken PL electron distribution, to represent the X-ray data of the NW lobe in NGC 6251 (Takeuchi et al. 2012); they yielded B = 0.37 μG, γ_max ∼ 10^6 (corresponding to the value of γ_b in our work), and a slope index of q = 2.5. Using a similar model with a PL electron distribution to fit the SED, Persic & Rephaeli (2019) obtained B = 0.40 μG and γ_max = 1.1 × 10^6. Our results are consistent with theirs.
Discussion and Conclusions
The RG NGC 6251 is reported to be associated with the γ-ray source 4FGL J1630.6+8234, and 4FGL J1630.6+8234 is identified as a PS in 4FGL-DR4 (Abdollahi et al. 2022; Ballet et al. 2023). We comprehensively analyze the ∼15 yr Fermi-LAT observation data of 4FGL J1630.6+8234 and observe that the giant lobes dominate the detected γ-rays of NGC 6251. By comparing the maximum likelihood values of the RMT model analysis and the PS model analysis, we find that the RMT model is preferable for describing the γ-ray emission of NGC 6251, at a confidence level of 8.1σ.
The spectrum of the core region is better described by an LP function compared to a PL function, while the spectra of the two lobes can be well explained with a PL form.The distinct spectral shapes between the core region and lobes also demonstrate the different origins of γ-rays.The γ-ray flux of the NW lobe is even higher than that of the core region, accounting for more than 50% of the total γ-ray flux.However, the emission flux from the SE lobe is very low, only being ∼7% of the total flux.The TS value of the SE lobe is only 30.96, while it is TS ∼600.31 for the NW lobe and TS ∼508.55 for the core region.
Neither the core nor the NW lobe displays significant flux variation in their long-term γ-ray light curves. However, X-ray observations of NGC 6251 with Swift-XRT do show flux variations, at a confidence level of 4.9σ (Figure 6 in Appendix B), which are likely attributable to the core-jet emission. It is worth noting that the core region in the RMT model encompasses the straight jet; low-level γ-ray flux variations originating from the radio core might be concealed by the radiation of the straight jet. This further supports the conclusion that the dominant contribution to the γ-ray flux observed from 4FGL J1630.6+8234 stems from the extended jet substructures of NGC 6251, surpassing that of the radio core.
On the basis of the derived average spectra in the X-ray and γ-ray bands, we construct broadband SEDs of the NW lobe and the core region and fit them with a one-zone leptonic model. The γ-rays of the NW lobe, together with its X-ray emission, are attributed to the IC/CMB process of the relativistic electrons, while the γ-rays of the core region are produced by the SSC process. The derived parameter values of the radiation regions are consistent with previous works that studied NGC 6251.
Following the examples of Cen A (Abdo et al. 2010a) and Fornax A (Ackermann et al. 2016), NGC 6251 is the third RG with measured extended γ-ray emission. Our findings further improve our understanding of the diversity of γ-ray emitters. The γ-rays from the large-scale extended regions of these three RGs can be measured only because of their close proximity and large angular size. The low spatial resolution of γ-ray detectors makes it difficult to judge the location of the γ-ray emission for most sources. In this sense, γ-ray emission from the large-scale jets of RGs may be universal, as predicted by some theoretical works (Zhang et al. 2009, 2010, 2018). On the other hand, most detected extragalactic sources in the GeV–TeV γ-ray band are blazars, owing to their aligned jets and the strong Doppler amplification of their emission. The lack of relativistic beaming in large-scale jets implies that the γ-rays detected from these regions are intrinsically strong. Therefore, the large-scale jets of AGNs may provide more energy input to the intergalactic medium in the γ-ray band (H.E.S.S. Collaboration et al. 2020).
A.2. Estimation of the γ-Ray Variability
To quantify the variability of the γ-ray light curves, we calculate χ² and the associated probability p(χ²) = 1 − p(>χ²) (Chen et al. 2022 and references therein) for the long-term γ-ray light curves, with
χ² = Σ_{i=1}^{N} (F_i − 〈F〉)² / σ_i²,  (A1)
〈F〉 = Σ_{i=1}^{N} (F_i / σ_i²) / Σ_{i=1}^{N} (1 / σ_i²),  (A2)
where N is the number of data points, 〈F〉 is the weighted mean flux, and F_i and σ_i are the flux and its error for the ith data point. However, none of the γ-ray light curves presented in this paper exhibits significant variability at a confidence level exceeding 2σ.
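The same statistic is easy to evaluate directly; the sketch below implements the weighted-mean χ² test and, as a rough cross-check, converts the Swift-XRT value quoted in Appendix B (χ² = 57.4 with 14 degrees of freedom) into a Gaussian significance. The two-sided conversion convention is an assumption of this sketch.

```python
import numpy as np
from scipy.stats import chi2, norm

def variability_chi2(flux, flux_err):
    """Chi-square of a light curve against its weighted mean flux <F>."""
    flux = np.asarray(flux, dtype=float)
    err = np.asarray(flux_err, dtype=float)
    weights = 1.0 / err ** 2
    mean_flux = np.sum(weights * flux) / np.sum(weights)   # <F>, Eq. (A2)
    chisq = np.sum((flux - mean_flux) ** 2 / err ** 2)     # Eq. (A1)
    dof = len(flux) - 1
    return chisq, chi2.sf(chisq, dof)                      # chi2 and p(> chi2)

# Cross-check with the X-ray numbers from Appendix B (15 epochs, chi2 = 57.4)
p = chi2.sf(57.4, 14)
print(p, norm.isf(p / 2.0))   # p ~ 3e-7, i.e. roughly the ~4.9 sigma quoted there
```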
Appendix B Swift-XRT Observations and Data Analysis
The XRT on board the Neil Gehrels Swift Observatory (Swift; Gehrels & Swift 2004; Burrows et al. 2005) has performed a total of 18 observations of NGC 6251 in the photon counting readout mode, spanning from 2007 April to 2023 October. One observation did not capture the radio position of NGC 6251, which fell outside the field of view of Swift-XRT. For two other observations, poor-quality data were obtained. Therefore, our analysis is based on a total of 15 observations, as listed in Table 3. The data are processed using the XRTDAS software package (v.3.7.0), which was developed by the ASI Space Science Data Center and released by the NASA High Energy Astrophysics Archive Research Center in the HEASoft package (v6.30.1). The calibration files from the XRT CALDB (v20220803) are used within xrtpipeline to calibrate and clean the events. The individual XRT event files are merged using the xselect package, and the average spectrum is then created from the combined event file. Events for the spectral analysis are extracted from a circle centered on the radio core of NGC 6251 with a radius of 20″. The background is taken from an annulus with inner and outer radii of 40″ and 80″. The ancillary response files, which are applied to correct for the point-spread function losses and CCD defects, are generated with the xrtmkarf task using the cumulative exposure map. The spectrum is grouped to ensure at least 20 counts per bin, and the χ² minimization technique is adopted for the spectral analysis. The spectrum is fitted by a single PL with two absorption components, one of which is absorption at z = 0 with the neutral hydrogen column density fixed at the Galactic value. We generate the XRT spectra at the other epochs following the procedures mentioned above. The derived photon index (Γ_X) and corrected flux (F_0.5–10 keV) for the 15 observational epochs are given in Table 3. The X-ray light curve obtained with the XRT observations is presented in Figure 6. Using Equations (A1) and (A2) above, we obtain 〈F〉_0.5–10 keV = 2.50 × 10^−12 erg cm^−2 s^−1 and χ² = 57.4 (d.o.f. = N − 1 = 14) with p = 3.4 × 10^−7, indicating X-ray variability at a confidence level of 4.9σ.
We also produce a combined average spectrum by incorporating the latest 11 observations, spanning from 2022 September to 2023 October, which are all corrected values, as displayed in Figure 4. It is worth noting that the first three Swift-XRT observations were previously reported by Migliori et al. (2011), and our findings are consistent with theirs within the errors.
Figure 1. Spectrum and light curve of 4FGL J1630.6+8234 when considering it as a PS. They are all derived with the ∼15 yr Fermi-LAT observation data in the 0.1–300 GeV band. The solid red line and gray dashed lines (in the left panel) represent the fitting result of the spectrum and the corresponding 1σ uncertainty, respectively. If TS < 4, an upper limit is presented for that energy bin. The light curve (in the right panel) is obtained with an adaptive-binning method based on a criterion of TS ≥ 9 for each time bin, where the minimum time bin step is 30 days. The horizontal red dashed line is the ∼15 yr average flux, i.e., F_0.1−300 GeV = (13.40 ± 0.51) × 10^−12 erg cm^−2 s^−1.
Figure 2. The 1°.5 × 1°.5 γ-ray counts map of 4FGL J1630.6+8234 in the 0.1–300 GeV band. The color bar represents the counts in each pixel, with a pixel size of 0°.01. The map has been smoothed with a Gaussian kernel of 0°.25. The green contours denote the large-scale radio structure of NGC 6251 obtained with the WSRT observations (Mack et al. 1997). The black cross represents the position of the radio core of NGC 6251. The red circle indicates the 95% error circle of the reestimated best-fit position of 4FGL J1630.6+8234 obtained with the ∼15 yr Fermi-LAT observation data in this paper. The 95% error ellipses of 4FGL J1630.6+8234 from 1FGL to 4FGL (marked with gray, brown, blue, and black ellipses, respectively) are also presented for comparison.
Figure 3. Spectra and light curves of the core region (top panels) and the NW lobe (bottom panels) of NGC 6251. They are all derived with the ∼15 yr Fermi-LAT observation data in the 0.1–300 GeV band. The solid red lines and gray dashed lines (in the two left panels) depict the fitted spectra along with their respective 1σ uncertainties. If TS < 4, an upper limit is presented for that energy bin. The light curves are derived with an adaptive-binning method based on a criterion of TS ≥ 9 for each time bin, where the minimum time bin step is 60 days. The data points of each time bin in the two light curves are obtained simultaneously. The horizontal red dashed lines represent the ∼15 yr average flux of each region, i.e., F_0.1−300 GeV = (4.86 ± 0.61) × 10^−12 erg cm^−2 s^−1 for the core region and F_0.1−300 GeV = (7.67 ± 0.76) × 10^−12 erg cm^−2 s^−1 for the NW lobe.
Figure 4. Broadband SEDs of the core region (left panel) and the NW lobe (right panel), along with the fitting results. The magenta, cyan, and green dashed lines represent the synchrotron, SSC, and IC/CMB components, respectively. The black solid lines show the sum of the emission components. For the core region, the data from radio to optical–UV (gray circles) are taken from Migliori et al. (2011), where the gray solid circles indicate the data corrected for dust extinction. The olive circles are the Swift-XRT data obtained in this work. The red symbols represent the Fermi-LAT data and are the same as those in Figure 3. For the NW lobe, the radio (gray open circles) and X-ray (brown open circles) data are taken from Takeuchi et al. (2012) and Persic & Rephaeli (2019). The red symbols represent the analysis in this work and are taken from Figure 3.
Figure 5. The 1°.5 × 1°.5 γ-ray counts maps of 4FGL J1630.6+8234 in different energy bands. The top left, top right, bottom left, and bottom right panels correspond to the 0.1–1 GeV, 1–3 GeV, 3–10 GeV, and 10–300 GeV bands, respectively. The symbols remain unchanged from those in Figure 2, with the exception of the red circle. Here, the red circle represents a region centered on the position of the radio core with a radius of 4′.5. Note that the red circle is subtracted and not included in the NW lobe template of the RMT model.
Figure 6. Top panel: the ∼15 yr γ-ray light curve of 4FGL J1630.6+8234 in the 10–300 GeV band when considering it as a PS; each time bin is 720 days, and the triangles indicate TS < 9 for that time bin. Bottom panel: the X-ray light curve of NGC 6251, obtained from the Swift-XRT observation data. The horizontal red dashed line represents the average flux of all the data points, 〈F〉_0.5–10 keV = 2.50 × 10^−12 erg cm^−2 s^−1.
Table 1. Test Results for Different Spatial Templates.
Notes. (a) The LP spectral form. (b) The PL spectral form.
Table 2. Fitting Parameters of the SED Fitting.
| Parameter | Core region | NW lobe |
| γ_b | 6.4 × 10³ | 3 × 10⁵ |
| γ_max | 1.9 × 10⁵ | 3.5 × 10⁶ |
| N_0 (cm⁻³) | 3.8 × 10⁵ | 7 × 10⁻⁷ |
| p_1 | 2.64 | 2.4 |
| p_2 | 3.52 | 4 |
The ASCIZ-DYNLL1 axis promotes 53BP1-dependent non-homologous end joining and PARP inhibitor sensitivity
53BP1 controls a specialized non-homologous end joining (NHEJ) pathway that is essential for adaptive immunity, yet oncogenic in BRCA1 mutant cancers. Intra-chromosomal DNA double-strand break (DSB) joining events during immunoglobulin class switch recombination (CSR) require 53BP1. However, in BRCA1 mutant cells, 53BP1 blocks homologous recombination (HR) and promotes toxic NHEJ, resulting in genomic instability. Here, we identify the protein dimerization hub—DYNLL1—as an organizer of multimeric 53BP1 complexes. DYNLL1 binding stimulates 53BP1 oligomerization, and promotes 53BP1’s recruitment to, and interaction with, DSB-associated chromatin. Consequently, DYNLL1 regulates 53BP1-dependent NHEJ: CSR is compromised upon deletion of Dynll1 or its transcriptional regulator Asciz, or by mutation of DYNLL1 binding motifs in 53BP1; furthermore, Brca1 mutant cells and tumours are rendered resistant to poly-ADP ribose polymerase (PARP) inhibitor treatments upon deletion of Dynll1 or Asciz. Thus, our results reveal a mechanism that regulates 53BP1-dependent NHEJ and the therapeutic response of BRCA1-deficient cancers.
To counteract the potentially carcinogenic effects of DNA damage and mutation, cells employ a complex network of DNA repair pathways 1 . DNA double-strand breaks (DSBs) are among the most toxic of genomic lesions. They arise from a variety of endogenous and exogenous sources, including ionizing radiation (IR) treatments, replication fork collapse, and as programmed intermediates of antigen receptor gene rearrangements during variable, diversity and joining (V(D)J) recombination and class-switch recombination (CSR) 2,3 . DSBs are predominantly repaired by homologous recombination (HR) and non-homologous end joining (NHEJ) pathways. HR is an essential DNA repair pathway owing to its utility in the repair of DSBs encountered during the S/G2 phases of proliferating cell populations 4 , and mutations affecting its fidelity are associated with tumourigenesis 1 . HR initiation involves the regulated nucleolytic resection of DSB termini in a 5ʹ-3ʹ direction, which is an important determinant of whether a given break is repaired by HR or NHEJ 5 . The resulting 3ʹ-tailed ends are used to invade homologous sequences on the sister chromatid, which is used as a template for accurate repair. The availability of a sister chromatid in only replicated regions of the genome ensures that HR repair is largely restricted to S and G2 phase cells, while NHEJ predominates in G1 3 . NHEJ instead involves the direct ligation of DSB ends, a mechanism utilized by developing and antigen-stimulated lymphocytes to generate programmed deletions during the diversification of antigen receptor genes. However, inappropriate activity of the NHEJ pathway is also known to promote genomic rearrangements and translocations associated with the onset of cancer. Activation of the appropriate pathway for a given DSB type, cellular context, or genomic locus is therefore crucial for the maintenance of genome stability and avoidance of deleterious errors or mutations 5 .
TP53-binding protein 1 (53BP1) is a DSB responsive protein that mediates an important and specialised branch of the NHEJ pathway 6 . 53BP1 is rapidly recruited to DSB sites, where its interaction with modified nucleosomes in DSB-flanking chromatin functions to inhibit nucleolytic DNA end resection, favouring DNA joining by NHEJ. This involves binding to a combinatorial chromatin signature consisting of methylated histone H4 (H4K20me1/2) and ubiquitinated H2A (H2AK15Ub) within DSB-associated chromatin domains 7,8 , and the recruitment of downstream effector proteins such as RIF1 (RAP1 interacting factor 1), whose binding to 53BP1 is required for efficient DNA end-protection and NHEJ [9][10][11] . RIF1-53BP1 complexes are essential for immunoglobulin (Ig) CSR, an NHEJdriven deletional recombination reaction in activated B cells that mediates the replacement of excised default IgM-encoding constant (C) gene segments of the Ig heavy locus (Igh), with downstream C segments of different class and function (e.g., IgG, IgE and IgA). Mice deficient in 53BP1 or RIF1 are immunodeficient owing to CSR failure 9,[12][13][14] , defects that manifest as a result of the aberrant hyper-resection of Igh DSBs that generate ssDNA intermediates non-amenable to NHEJ 9,14,15 . The 53BP1 pathway also plays an equivalent but pathological role in DNA end-joining at de-protected telomeres: telomeric DNA ends exposed upon disruption of the telomere capping complex Shelterin are predominantly repaired by 53BP1-dependent NHEJ, resulting in chromosome end fusions 16 . Accordingly, uncapped telomeric DNA ends are hyper-resected in 53bp1-and Rif1-deficient cells, resulting in a near complete suppression of telomere endjoining 11,17 . 53BP1-dependent NHEJ is also problematic in BRCA1-deficient cancers, where it mediates chromosomal rearrangements that drive oncogenesis 18,19 . In mice, germline 53bp1-deletion suppresses the embryonic lethality and mammary tumourigenesis associated with homozygous Brca1 loss-of-function mutations, a rescue explained by reactivation of HR 19 . Conversely, 53BP1 pathway-associated DSB repair activities underlie the synthetic lethal effect of poly-ADP ribose polymerase (PARP) inhibitor (PARPi) treatments in BRCA1 mutation-associated cancers: genetic ablation of 53BP1 pathway components results in PARPi resistance in cellular and tumour models of BRCA1deficiency 9,20,21 .
While mechanistically distinct, both NHEJ and p53-regulatory functions require 53BP1 multimerization 22,26,27 , a function thought to rely entirely on its oligomerization domain (OD; encoded within amino acid residues (a.a.) 1231-1277) 28 . NHEJ deficits have been attributed to an inability to recruit 53BP1 to DSB-associated chromatin, as OD mutation/deletion blocks the retention of 53BP1 protein fragments at DNA damage sites, and also compromises their interaction with modified nucleosome core particles in vitro 8,28,29 . Perhaps unexpectedly, we and others recently reported that the recruitment of a full-length 53BP1 protein into IR-induced foci (IRIF) was only modestly impacted by OD-deletion or mutation 22,27 . This prompted us to explore the molecular basis and function of OD-independent 53BP1 recruitment.
Here, we reveal that 53BP1 can interact with DNA damage sites independently of its OD, and this recruitment depends on interaction with the multifunctional homodimeric protein hub dynein light chain 1 (DYNLL1). We demonstrate that DYNLL1 binding promotes ordered 53BP1 oligomerization and is essential for efficient CSR and adaptive immunity in mice, while also contributing to 53BP1-dependent p53 regulation. In addition, we show DYNLL1-53BP1 interplay plays an essential role in mediating toxic NHEJ events in Brca1 mutant cancer cells: deletion of DYNLL1 or its transcriptional regulator ATM substrate Chk2-interacting Zn2+ finger protein (ASCIZ; also known as ATMIN/ZNF822) is strongly selected for in BRCA1-deficient tumour cells, and results in PARPi resistance in vitro and in vivo. Thus, our findings identify an important mechanism in which DYNLL1 directly regulates the fidelity of 53BP1-dependent NHEJ.
Results
DYNLL1 is essential for OD-independent 53BP1 recruitment. The minimal domain requirements for 53BP1 DSB localization were defined in the context of a 53BP1 fragment encompassing a.a. 1220-1711 30,31 . This fragment spans 53BP1's nuclear localization sequence (NLS), OD, tandem-Tudor, and ubiquitin-dependent recruitment (UDR) domains, all of which were essential for its recruitment into IRIF 8,28 . We were therefore surprised that complete deletion of its OD domain (53BP1 ODΔ) or Ala mutation of four key residues for OD activity (53BP1 ODm) only negligibly impacted on 53BP1 IRIF frequencies, despite completely blocking recruitment in the context of truncated 53BP1 1220-1711 proteins (Fig. 1a). To map the sequences in 53BP1 that mediate OD-independent recruitment, IRIF localization patterns were examined across a panel of truncated 53BP1 ODm proteins in which 53BP1 sequences N- and C-terminal to a.a. 1220-1711 were restored (Fig. 1b and Supplementary Fig. 1a). We found that the addition of 113 residues N-terminal to 53BP1 1220-1711 rescued 53BP1 ODm recruitment into IRIF (Fig. 1b), implicating a.a. 1107-1219 in OD-independent 53BP1 recruitment.
Interestingly, alanine substitutions at 28 N-terminal [Ser/Thr]-Gln consensus ATM/ATR phosphorylation site motifs in the full-length 53BP1 ODm protein recapitulated the effect of a.a. 1107-1219 deletion on IRIF recruitment (Fig. 1b), and suggested that some of these Ser-Gln or Thr-Gln motifs may be involved in the OD-independent 53BP1 recruitment to DSBs. We noted that Thr-1171, one of the three Ser-Gln/Thr-Gln motifs in the region critical for OD-independent IRIF formation (a.a. 1107-1219), also serves as a binding site (consensus Thr-Gln-Thr) for the dynein light chain protein DYNLL1 (LC8) 32 . As DYNLL1 is known to function as a ubiquitous sequence-specific dimerization hub for a plethora of diverse proteins 33,34 , we considered a role for DYNLL1 in the OD-independent 53BP1 recruitment mechanism. 53BP1 a.a. 1107-1219 also contains a second DYNLL1-binding site 32 , and indeed, alanine substitution of the three anchor residues (GIQ and TQT, respectively) in each of the two DYNLL1-binding site motifs of 53BP1 (53BP1 LC8m) completely blocked OD-independent 53BP1 IRIF formation (Fig. 1c-d). Moreover, we found that DYNLL1 colocalized with 53BP1 in nuclear foci and that this interaction was abolished by the 53BP1 LC8m mutations (Supplementary Fig. 1b). However, the 53BP1 LC8m mutation did not interfere with OD-mediated oligomerization in vitro (Supplementary Fig. 2a), or downstream RIF1 recruitment (Supplementary Fig. 2b, c). Thus, collectively, these data indicate that direct binding of DYNLL1 to the LC8-binding motifs is critical for the OD-independent recruitment of 53BP1 to DSB sites.
To further explore DYNLL1's role in modulating 53BP1 functions, we used CRISPR-Cas9 to mutagenize the DYNLL1 gene in 53BP1 -/-MCF-7 cell populations stably complemented with wild type (WT) 53BP1 or 53BP1 ODm . DYNLL1-depletion in these populations (without selection for stable DYNLL1 knockout clones) caused a near complete block in 53BP1 ODm foci formation ( Fig. 1e and Supplementary Fig. 2d, p < 0.0001), and residual 53BP1 IRIF frequencies in these experiments correlated to cells of increased DYNLL1 nuclear staining intensity ( Supplementary Fig. 2e). In contrast, DYNLL1-depletion did not affect the frequencies of WT 53BP1 IRIF when the OD was intact ( Fig. 1e; p = 0.1363). These experiments therefore suggested that DYNLL1-dependent binding to the LC8 motifs in 53BP1 promotes 53BP1 oligomerization, explaining the mechanism of OD-independent 53BP1 recruitment to IRIFs.
DYNLL1-53BP1 interactions are DNA damage- and cell cycle-independent. As indicated above, one of the two LC8 motifs (LC8 motif 2) in 53BP1 also comprises an ATM/ATR phosphorylation site (Thr-1171, consensus Thr-Gln) that is phosphorylated following IR treatment 35 , and a second putative ATM phosphorylation site (Ser-1148) is positioned only 5 a.a. residues upstream of LC8 motif 1. Thus, we considered that 53BP1-DYNLL1 interactions might be DNA damage regulated. To test this, Flag-HA tagged WT 53BP1 and 53BP1 LC8m mutant protein complexes were immunoprecipitated from lysates prepared from irradiated and untreated stable cell lines and analyzed for the presence of DYNLL1. In confirmation of a role for the 53BP1 LC8 motifs in mediating DYNLL1 interactions, DYNLL1 co-precipitated with wild type, but not LC8-motif mutant, 53BP1 complexes (Fig. 1f). Notably, DYNLL1-53BP1 interactions were not significantly altered upon IR treatment (Fig. 1f), discounting a role for Thr-1171 phosphorylation in regulating DYNLL1-53BP1 interactions. In addition, mutation of Ser-1148, which would be expected to form the N-terminal residue of the fully extended ~8-residue beta-strand interacting with the DYNLL1-binding groove based on known structures for DYNLL1-target complexes 36 , by itself had only a modest effect on DYNLL1 binding. Mutation of Thr-1171, however, reduced DYNLL1 binding to a greater extent, which was further reduced in concert with the Ser-1148-Ala mutation, albeit not to the same extent as the 53BP1 LC8m interaction-blocking mutation in which the entire 3-residue anchor motifs are disabled (Supplementary Fig. 1c). Moreover, combined alanine substitutions at Thr-1171 and Ser-1148 resulted in the near total loss of OD-independent 53BP1 IRIF (Supplementary Fig. 1d). The most likely explanation for these results is that mutation of these two phosphorylation site motifs impacts the interaction with DYNLL1 by weakening its binding sites in 53BP1 rather than by impairing their phosphorylation state, reconciling the IRIF recruitment defect of the 53BP1 ODm,28A mutant protein (Fig. 1b).
Next, we examined whether DYNLL1-53BP1 interactions might be regulated by cell cycle position. Cell synchronisation experiments confirmed DYNLL1 did not exhibit cell-cycledependent fluctuations in expression during interphase (Supplementary Fig. 3a), or mitosis ( Supplementary Fig. 3b). To determine whether 53BP1 might be regulated by cell-cycle dependent modification of DYNLL1, or otherwise, 53BP1 IRIF were quantified in G1, S and G2 cell-cycle stage classified populations in WT and mutant complemented 53BP1 -/-MCF-7 stable cell lines ( Supplementary Fig. 3c). Here, the 53BP1 LC8m mutation did not significantly impair foci formation at any cell cycle phase ( Supplementary Fig. 3d). Likewise, 53BP1 ODm IRIF frequencies, that rely entirely on DYNLL1-interactions (see Fig. 1c-f), were equally reduced at all cell cycle phases ( Supplementary Fig. 3d). Collectively, these experiments indicate DYNLL1-53BP1 interactions are likely to be constitutive, and suggested that DYNLL1 might represent an integral component of 53BP1 complexes.
53BP1-DYNLL1 interactions are required for class-switch recombination. We next examined the function of DYNLL1 and DYNLL1-53BP1 interactions during CSR, which relies on 53BP1-mediated NHEJ 12,13 . DYNLL1 is essential for normal B cell development, and its deletion in the early B cell lineage using Mb1-Cre leads to dramatic losses in circulating and mature splenic B cell populations ( Supplementary Fig. 4a) 37 . We therefore used transgenic Cd23-Cre driver (Cd23-Cre Tg ) to delete Dynll1 in mature B lymphocytes in Dynll1 F/F mice, which supported the development of normal frequencies of mature splenic B cells in which DYNLL1 protein was efficiently depleted (Supplementary Fig. 4b, c). Cultured Dynll1-deleted B splenocytes upon stimulation with IL-4 and LPS showed a reduction in classswitching to IgG1, and IgG1 switching frequencies in Cd23-Cre Tg Dynll1 F/F cells were consistently reduced by >50% relative to Cd23-Cre Tg controls in all cell populations that had undergone equivalent numbers of cell divisions (Fig. 2a, b). This was independently confirmed by Cd23-Cre-mediated deletion of the Asciz gene in mouse mature B cells, which encodes DYNLL1's transcriptional regulator ASCIZ (ATMIN) 38,39 and resulted in greatly reduced DYNLL1 expression and an equivalent reduction in class-switching efficiency ( Supplementary Fig. 4d, e), consistent with previous results in B cells from Cd19-Cre Asciz-mutant mice 40 . The fact that CSR defects in Dynll1-and Asciz-deleted B cells could not be explained by the defective expression of Igh switch region germ-line transcripts ( Supplementary Fig. 4f), nor the magnitude of proliferation defects in Dynll1-deleted B cells (Fig. 2b), was consistent with a role for DYNLL1 and its regulator ASCIZ in the end joining phase of CSR.
To exclude the possibility that indirect, 53BP1-independent, consequences of DYNLL1-deletion could account for CSR defects in Dynll1-deficient B cells, we next monitored the effect of LC8 binding site mutations on 53BP1-dependent CSR. LPS/IL-4induced class-switching to IgG1 was analysed in stimulated 53bp1 -/primary B splenocytes upon reconstitution with retroviruses that express wild type 53BP1, or the 53BP1 ODm , 53BP1 LC8m , and 53BP1 LC8m/ODm mutant proteins, or as a negative control, the NHEJ-defective 53BP1 L1619A UDR mutant 8 ( Fig. 2c; note that to avoid the inefficient packaging of large 53BP1 inserts in retroviral particles, all rescue constructs express a truncated 53BP1 protein (a.a. 1-1711) that supports WT CSR frequencies 26 ). As expected, 53BP1 1-1711 expression rescued class switching in 53bp1 -/-B cell cultures, while the repair-deficient 53BP1 L1619A mutant could not (Fig. 2c, d). In contrast, 53BP1 LC8m -reconstitution only restored CSR to~50% of WT (Fig. 2c, d), a defect consistent in magnitude to Dynll1-deficient B cells, confirming DYNLL1 mediates CSR via its regulation of 53BP1 complexes and their NHEJ activities. As expected 26 , 53bp1 -/-B cells complemented with a mutant OD-domain allele (53BP1 ODm ) failed to restore CSR to any significant degree (Fig. 2c, d). Thus, while DYNLL1 can mediate OD-independent 53BP1 DSB recruitment, OD-independent recruitment alone is unable to support NHEJ during CSR, confirming a cooperation between OD-and DYNLL1-dependent oligomerization in the assembly of NHEJ-competent 53BP1 oligomers.
DYNLL1 stimulates 53BP1 oligomerization to promote optimal chromatin interactions. To ascertain why DYNLL1-mediated 53BP1 recruitment alone is insufficient for DNA repair, we investigated DYNLL1's contribution to 53BP1-chromatin interaction dynamics at DSB sites. Stable 53BP1 -/- MCF-7 cell lines were established that expressed equivalent levels of an mC2 (mClover2)-tagged 53BP1 a.a. 1107-1711 fragment, comprising WT (mC2-f53BP1) or mutated versions of the OD (mC2-f53BP1 ODm) or LC8 motifs (mC2-f53BP1 LC8m). The mobility of each protein in IRIF was then calculated in fluorescence recovery after photobleaching (FRAP) experiments (Fig. 3a). Fluorescence recovery of mC2-f53BP1 ODm was >3-fold faster than that of WT mC2-f53BP1 protein, as calculated from the half-time for recovery to maximum fluorescence: t 1/2 = 47.3 s ± 18.12 and 163.25 s ± 46.9, for mC2-f53BP1 ODm and mC2-f53BP1 proteins, respectively (Fig. 3b, c). Thus, DYNLL1-mediated 53BP1 oligomerization alone is insufficient for stable chromatin interaction at DSB sites. These results confirm canonical DYNLL1-independent oligomerization is primarily responsible for 53BP1 interactions with DSB-associated chromatin. To control for variation in fluorescence recovery endpoints between experimental repeats (Fig. 3d), we analyzed the rate of fluorescence recovery in a manner that, unlike t 1/2 , was not related to the endpoint. Thus, we computationally modelled the fluorescence recovery data of three independent experiments (each comprising n > 8 cells per genotype) to calculate the initial rate of fluorescence recovery immediately after photobleaching (Fig. 3d). Initial rate calculations provide a concentration-independent measurement of fluorescence recovery that is independent of fluorescence recovery endpoint. Consistent with t 1/2 measurements, the initial recovery rates of each protein reproduced the trends seen in Fig. 3b, c, with large and moderate increases in the recovery rates of the mC2-f53BP1 ODm and mC2-f53BP1 LC8m mutant proteins, respectively, relative to WT mC2-f53BP1 (Fig. 3e). These data confirmed an important role for DYNLL1 in stabilizing OD-dependent 53BP1-chromatin interactions.
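The recovery-model equation and fitting code are not reproduced in this extraction, so the following is only a rough sketch of how the two readouts discussed above (half-time to maximal recovery and initial recovery rate) could be extracted from a normalized FRAP trace. It assumes a single-exponential recovery model, which is a common choice but an assumption rather than the authors' actual fitting function; the function names, parameters, and example data are illustrative only.

```python
# Hedged sketch: fit a normalized FRAP recovery trace to a single-exponential
# model f(t) = a * (1 - exp(-k * t)) and report t_1/2 and the slope at t = 0.
# Model form, parameter names, and the synthetic trace are assumptions, not
# values from the paper.
import numpy as np
from scipy.optimize import curve_fit

def single_exp(t, a, k):
    # a: recovery plateau (mobile fraction), k: rate constant (1/s)
    return a * (1.0 - np.exp(-k * t))

def fit_frap(times, intensities):
    (a, k), _ = curve_fit(single_exp, times, intensities, p0=(0.5, 0.01))
    t_half = np.log(2.0) / k      # time to half of the fitted plateau
    initial_rate = a * k          # slope of the fitted curve at t = 0 for this model
    return a, k, t_half, initial_rate

if __name__ == "__main__":
    # synthetic example: one image every 5 s for 600 s after the bleach pulse
    t = np.arange(0.0, 600.0, 5.0)
    rng = np.random.default_rng(0)
    trace = single_exp(t, 0.35, 0.012) + rng.normal(0.0, 0.01, t.size)
    a, k, t_half, rate = fit_frap(t, trace)
    print(f"plateau={a:.3f}  k={k:.4f}/s  t1/2={t_half:.1f} s  initial rate={rate:.4f}/s")
```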
To determine the mechanism by which DYNLL1 contributes to 53BP1 function, we next probed DYNLL1's ability to promote 53BP1 oligomerization in vitro. Thus, purified DYNLL1 was titrated into binding reactions that contained a recombinant monomeric 53BP1 protein fragment encoding the two LC8binding motifs and a mutated OD domain (a.a. 1131-1292). Reactions were then resolved by native PAGE to ascertain DYNLL1's ability to stimulate OD-independent 53BP1 oligomerization. In these experiments, increasing DYNLL1 to levels equimolar and above that of the 53BP1 fragment, stimulated its robust oligomerization as determined by a marked increase in higher molecular weight protein complexes (Fig. 3f). In confirmation of a role for the 53BP1 LC8 binding motifs in mediating the assembly of DYNLL1-bridged 53BP1 oligomers, higher molecular weight complexes did not form when both LC8binding motifs in the 53BP1 fragment were mutated, and these reactions were characterised by an increased presence of ligand-free DYNLL1 dimers (Fig. 3f). Taken together, our data indicate that optimal 53BP1 oligomerization is mediated via a cooperation between its OD-domain (Fig. 1a, b), and the binding of DYNLL1 to its upstream binding sites in 53BP1 (Fig. 3f). We therefore conclude that the intermolecular affinity afforded by DYNLL1-53BP1 interactions is sufficient to promote 53BP1 oligomerization, yet in isolation is insufficient to efficiently tether 53BP1 complexes to DSB-associated chromatin. We instead propose that DYNLL1 may provide order to large oligomeric 53BP1 complexes, a function required for optimal chromatin interactions and its associated DNA repair function.
PARPi hypersensitivity in Brca1-deficient cells requires 53BP1-DYNLL1 cooperation. The fact that DYNLL1-mediated regulation of 53BP1 oligomerization enhances 53BP1-dependent CSR next prompted us to test its importance more broadly in 53BP1-dependent NHEJ. The efficacy of PARPi in killing BRCA1-deficient cells is dependent on 53BP1, and its loss in BRCA1-deficient cell lines and mice results in resistance to PARPi-induced cell death 9,19-21 . We therefore theorized that reduced efficiency of 53BP1-dependent NHEJ upon DYNLL1 depletion might likewise confer PARPi resistance to BRCA1-deficient cells. Using CRISPR-Cas9, we mutagenized the Dynll1 and Asciz genes in the KB1P-G3 Brca1 -/- p53 -/- murine mammary tumour cell line 20 , and monitored locus-specific indels within cell populations by deconvolution of complex Sanger sequencing traces derived from Cas9 cleavage site-spanning PCR amplicons using the Tracking of Indels by Decomposition (TIDE) algorithm 41 (Supplementary Fig. 5a). To determine whether mutation of Dynll1 or Asciz provided a selective advantage in KB1P-G3 cells due to PARPi resistance, the change in percentage of edited alleles between the starting population and the surviving fraction was compared before and after outgrowth in the presence or absence of the PARPi olaparib (Fig. 4a and Supplementary Fig. 5a). In all experiments in which the initial indel frequency at the Dynll1 and Asciz loci was below saturation (<85%) in treatment-naive cells, olaparib treatments selected for striking increases in indel frequency (Fig. 4b and Supplementary Fig. 5b). These effects were coupled to reductions in DYNLL1 protein levels when compared to untreated controls (Supplementary Fig. 5c), collectively confirming that both proteins are required for the hypersensitivity of Brca1-deficient cells to PARPi. Cas9-dependent mutagenesis of either gene in KB1P-G3 cells also conferred olaparib resistance in clonogenic survival experiments (Fig. 4c, d). Importantly, the olaparib resistance in Asciz-mutated KB1P-G3 cells could be completely reversed by transgene-mediated re-expression of exogenous DYNLL1 (Supplementary Fig. 6a-c), confirming the importance of ASCIZ-mediated DYNLL1 transcription in the response of BRCA1-deficient cells to PARPi. However, neither selection for CRISPR-Cas9-dependent Dynll1 or Asciz locus editing, nor reduced DYNLL1 expression, was observed when equivalent experiments were performed in 53bp1 -/- KB1P-G3 cells (Fig. 4e and Supplementary Fig. 5d, e), indicating that DYNLL1's role in mediating the PARPi sensitivity of Brca1-deficient cells occurs predominantly via its regulation of 53BP1 activities.
We next examined whether the ASCIZ-DYNLL1 axis was required for PARPi sensitivity in vivo. ASCIZ and DYNLL1 have previously been shown to have equivalent effects on the development and expansion of Myc-driven cancers 37,42 , thus we monitored the growth of Cas9-expressing Brca1 -/- p53 -/- transplanted mouse cancer organoids 43 transduced with control or Asciz-targeting gRNA (Supplementary Fig. 5f, g). Olaparib treatment significantly delayed the onset of tumour growth in transplanted animals relative to an untreated cohort. However, PARPi-dependent inhibition of tumour growth was attenuated in Asciz-edited tumours (Fig. 4f), and resulted in decreased overall survival relative to control olaparib-treated cohorts (Fig. 4g). These data demonstrated that regulation of 53BP1 by the ASCIZ-DYNLL1 axis supports the efficacy of olaparib treatments in selectively killing Brca1-deficient cancer cells, confirming the importance of DYNLL1-53BP1 interactions in regulating 53BP1-dependent DNA repair.
DYNLL1-binding sites contribute to 53BP1-dependent Nutlin-3 sensitivity. In addition to its role in the DNA damage response, 53BP1 also contributes to p53-dependent transcriptional senescence programs and the corresponding cellular sensitivity to treatment with the MDM2 inhibitor Nutlin-3 22,44 . Interestingly, the 53BP1 LC8m mutant, in contrast to wild type 53BP1 22 , was compromised in its ability to restore the sensitivity of 53BP1 -/- MCF-7 cells to Nutlin-3 (Fig. 5a-c). This agrees with a model in which DYNLL1 is likely to enforce optimal 53BP1 oligomerization, and is consistent with our published findings in which efficient 53BP1 oligomerization was similarly essential for 53BP1-dependent regulation of p53 22 .
Discussion
Taken together, our findings reveal DYNLL1 to be an integral component of oligomeric 53BP1 complexes through which it plays an important role in 53BP1-dependent NHEJ. We speculate that DYNLL1 binding to 53BP1 may provide order to 53BP1 oligomers within chromatin-binding domains, promoting productive chromatin interactions that strengthen resection inhibition and enable efficient NHEJ. Deficiencies in either DYNLL1 or its binding sites in 53BP1 lead to substantial CSR defects. Given that 53BP1 is similarly important for resection inhibition and NHEJ during V(D)J recombination, loss of which manifests as defective development of B and T lymphocyte lineages [45][46][47] , it is plausible that 53BP1-associated V(D)J recombination defects that accompany loss of DYNLL1-53BP1 interactions might contribute to the B cell lineage development defects recently reported in Dynll1-deficient mice 37 . We also found that DYNLL1 contributes to DNA damage-independent 53BP1 functions in the response to the MDM2 inhibitor Nutlin-3 (Fig. 5a-c). This agrees with a model in which DYNLL1 enforces optimal 53BP1 oligomerization, and is consistent with our published findings in which efficient 53BP1 oligomerization was similarly essential for 53BP1-dependent regulation of p53 22 . Our results therefore suggest bivalent oligomerization modes are necessary to coordinate the assembly of 53BP1 molecules into ordered, functionally competent multimeric complexes, insights that will be relevant for understanding other macromolecular complexes organized by DYNLL1 (Fig. 6). In identifying DYNLL1 and ASCIZ as essential mediators of 53BP1-dependent PARPi sensitivity in Brca1-deficient cancer cells, our findings provide a molecular explanation for the identification of both proteins as PARPi resistance factors in BRCA1-deficient cells in recently published genetic screens 48,49 , and reveal a previously unanticipated mechanism in which 53BP1 misregulation could lead to PARPi resistance in the clinic.
Methods
Cell lines and culture conditions. Cell lines used in this study are MCF-7 female human breast adenocarcinoma, RPE-1 female human retinal pigment epithelial, human female embryonic kidney (HEK) 293T cells (including the BOSC23 derivative cell line), and KB1P-G3 Brca1 -/- p53 -/- cells, which were isolated from a female mouse mammary tumour. All cell lines were checked for mycoplasma contamination. KB1P-G3 and the KB1P-G3 53bp1 -/- derivative were both kindly provided by Sven Rottenberg (NKI, Netherlands and University of Bern). All cell lines were cultured in DMEM-high glucose (D6546, Sigma-Aldrich) supplemented with 10% FBS, Pen-Strep, and 2 mM L-glutamine. Cultures were grown at 37°C with 5% CO2. KB1P-G3 and all derivative cell lines were additionally grown in 3% oxygen. Primary B cells were isolated from red blood cell-lysed single-cell suspensions of 53bp1 -/- C57BL/6J mouse spleens and cultured in RPMI supplemented with 10% FCS, 100 U/ml penicillin, 100 ng/ml streptomycin, 2 mM L-glutamine, 1x MEM non-essential amino acids, 1 mM sodium pyruvate and 50 μM β-mercaptoethanol.
[Figure legend fragment: d Quantification of growth in olaparib (300 nM) after transduction with indicated guide RNAs. Quantification of n = 3 independent experiments, each with three technical replicates. Mean ± SD. Significance determined by unpaired two-tailed t test. e Change in the percent of edited alleles in 53bp1 -/- p53 -/- KB1P-G3 cells after growth in DMSO or olaparib (75 nM). Data representative of n = 2 independent experiments. f Relative tumour volume of transplanted organoids after treatment with olaparib. n > 7 animals per condition. g Kaplan-Meier plot indicating survival of organoid-transplanted animals. Significance between sgNT-olaparib and sgASCIZ-olaparib was determined by Log-rank Mantel-Cox test (p = 0.0018). n > 7 animals per condition.]
from blasticidin selection. Stable transgene expression was similarly achieved by two rounds of lentiviral transduction with viral supernatants generated using the same procedure.
In experiments involving EdU incorporation, cultures were pulsed with 40 μM EdU for 10 min immediately preceding irradiation. After fixation, EdU was labelled using the Click-iT™ EdU Alexa Fluor 647 Imaging Kit (Thermo Fisher, C10340) according to the manufacturer's protocol. This was done immediately prior to antibody staining as described above.
Images for the initial mapping of the LC8 domain within the 53BP1 N-terminus (Fig. 1) were acquired on a Zeiss LSM510 META confocal imaging system. RIF1 IRIF (Supplementary Fig. 2b, 2c) were analyzed on a Leica SP8 SMD X confocal microscope. All other fixed IRIF images were acquired on an Olympus Epi-fluorescence MMI CellEctor widefield microscope. Quantitative analysis of IRIF number and staining intensity was performed using Cell Profiler (Broad Institute). All image visualization and processing was done using Fiji.
Olaparib sensitivity assays. KB1P-G3 and KB1P-G3 53bp1 -/- cells were subjected to two rounds of lentiviral transduction with viral supernatant generated using lentiCRISPR-Bsr. After selection in blasticidin (10 μg/ml), samples were collected to isolate genomic DNA (gDNA) immediately prior to seeding for olaparib sensitivity. Blasticidin-resistant populations were seeded in six-well plates at a density of 10^4 cells per well (5 × 10^3 for the KB1P-G3 53bp1 -/- derivative) in the presence of olaparib or DMSO and grown at 37°C (5% CO2 and 3% O2). Medium was refreshed at 4 days and 8 days. After 10 days outgrowth, cultures were expanded in fresh medium for 1 week in 6 cm dishes before harvesting gDNA. PCR fragments encompassing the CRISPR cut site were amplified from gDNA using Q5 polymerase (NEB, M0491), sequenced (GATC Biotech), and analyzed by Tracking of Indels by Decomposition (TIDE: https://tide.deskgen.com/). Surviving cells were collected and three replicates were plated in DMSO and three in olaparib for viability analysis. DMSO and olaparib-treated cells were stained with crystal violet (0.5% (w/v) crystal violet in 25% methanol) after 8 days and 10 days growth (37°C, 5% CO2 and 3% O2), respectively. Crystal violet stained cells were dissolved in a 10% (v/v) acetic acid solution a minimum of 24 h after staining and the OD 595 was measured as a quantitative metric of relative growth.
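As a hedged illustration of the two simple quantities derived from this assay — the change in the percentage of edited alleles between the starting and surviving populations, and crystal-violet OD595 as a relative-growth readout — a minimal sketch follows. The variable names and example numbers are placeholders, not values from the study.

```python
# Minimal sketch (assumed names and example values, not study data):
# compare TIDE-derived indel percentages before/after outgrowth, and express
# olaparib-treated growth relative to the DMSO control from OD595 readings.

def indel_shift(percent_edited_before: float, percent_edited_after: float) -> float:
    """Change in percent edited alleles across selection (positive = enrichment)."""
    return percent_edited_after - percent_edited_before

def relative_growth(od595_treated: list, od595_control: list) -> float:
    """Mean crystal-violet OD595 of treated wells divided by mean of control wells."""
    return (sum(od595_treated) / len(od595_treated)) / (sum(od595_control) / len(od595_control))

print(indel_shift(40.0, 78.0))                                   # e.g. +38 percentage points
print(relative_growth([0.42, 0.45, 0.40], [0.90, 0.95, 0.88]))   # e.g. ~0.47 of control growth
```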
Nutlin-3 sensitivity assay. Cells were seeded in triplicate at a density of 1.25 × 10 4 per well in a six-well plate in the presence (11 days growth) or absence (7 days growth) of Nutlin-3 (4 μM, Cayman Chemicals, added 16 h after seeding). Cells were stained with crystal violet and quantified as described for olaparib sensitivity assays. These experiments were performed in triplicate.
FRAP. 53BP1 -/-MCF-7 cells were stably transduced with an mClover2-tagged fragment of 53BP1 encompassing amino acids 1107-1711 of the wild type protein. Two days before FRAP, 2.5 × 10 5 cells were seeded on a 35 mm glass-bottom plate. Cells were irradiated (5 Gy) with an X-ray source to induce IRIF formation 2 h before imaging. Individual foci were selected and 10 prebleach images were acquired prior to a single bleach pulse with an Argon laser at a laser transmission of 100%. Immediately after bleaching, 120 images were acquired in 5 s intervals at 0.5% laser intensity. All acquisition files were processed in ImageJ with the Stackreg plugin to account for nuclear migration in the recovery period. Regions of interest encompassing the bleached area, the entire nuclei and a section of background were selected and mean intensity values were quantified for each in ImageJ. These values were then normalized and fitted using the easyFRAP software 52 . All acquisitions were performed on a Leica SP8 SMD X confocal microscope.
Initial rate values were calculated based on curve of best fit generated in MATLAB according to the following equation.
Curves of best fit were generated with the following coefficient values (with 95% confidence bounds), mC2-f53BP1: a = 0.3191 (0.3083, 0.
[Figure legend fragment: OD-mediated oligomerization alone is sufficient for chromatin binding but results in severe DSB repair defects (top, right). DYNLL1-mediated oligomerization is also sufficient to support recruitment of 53BP1 to damage sites, but does not support DNA repair (bottom, left). Loss of both OD- and DYNLL1-mediated modes of oligomerization eliminates 53BP1 recruitment to DNA damage sites and all associated repair functions (bottom, right).]
Expression and purification of 53BP1. Smt3-His6-tagged fragments (amino acids 1131-1292) of wild type (LC8-OD), OD-mutated (ODm-LC8), LC8-mutated (OD-LC8m), double mutant (LC8m-ODm) 53BP1, and DYNLL1 were expressed from pET-17b in E. coli BL21(DE3). Cells were lysed by sonication in 20 mM HEPES pH 7.5, 250 mM NaCl, and 0.25 mM TCEP. Lysates were clarified by centrifugation and the supernatant was applied to a TALON IMAC column (Supplementary Fig. 2a) or Ni-NTA agarose beads (Fig. 3f). Bound complexes were washed with 20 mM HEPES pH 7.5, 250 mM NaCl, 0.25 mM TCEP, supplemented with 10 mM imidazole and then eluted by increasing the imidazole concentration to 250 mM. Equal concentrations of eluate were then fractionated over a Superdex S200 column (Supplementary Fig. 2a).
In vivo tumourigenesis studies. All animal experiments were approved by the Animal Ethics Committee of The Netherlands Cancer Institute (Amsterdam, the Netherlands) and performed in accordance with the Dutch Act on Animal Experimentation (November 2014). KB1P4 tumour organoids were established previously and cultured in AdDMEM/F12 supplemented with 1 M HEPES (Sigma), GlutaMAX (Invitrogen), penicillin/streptomycin (Gibco), B27 (Gibco), 125 μM N-acetyl-L-cysteine (Sigma), 50 ng/mL murine epidermal growth factor (Invitrogen) (Duarte et al. 43 ). Tumour organoid transduction was performed by spinoculation with pLentiCRISPRv2 lentiviral constructs in which either no gRNA or a gRNA targeting ASCIZ was cloned. Following puromycin selection, modified organoids were collected, incubated with TripLE at 37°C for 5ʹ, dissociated into single cells, washed in PBS, resuspended in tumour organoid medium and mixed in a 1:1 ratio of tumour organoid suspension and BME in a cell concentration of 10 4 cells per 40 μl. Subsequently, 10 4 cells were transplanted in the fourth right mammary fat pad of 6-9-week-old female NMRI nude mice. Mammary tumour size was measured by caliper measurements and tumour volume was calculated (0.5 × length × width 2 ). Treatment of tumour bearing mice was initiated when tumours reached a size of 50-100 mm 3 , at which point mice were stratified into the untreated (n = 8) or olaparib treatment group (n = 8). Olaparib was administered at 100 mg/kg intraperitoneally for 80 consecutive days. Animals were sacrificed with CO 2 when the tumour reached a volume of 1500 mm 3 . Sample sizes were not pre-determined and no blinding or randomization was employed during analysis.
Quantification and statistical analysis. Prism 6 (GraphPad Software Inc.) was used for statistical analysis and production of all graphs and dot plots. The relevant statistical methods and measures of significance for each experiment are detailed in the figure legends. Cell Profiler (Broad Institute) was used for automated quantitation of immunofluorescence data. Normalization of FRAP recovery curves was performed using the easyFRAP software and curves of best fit were generated in MATLAB.
Data availability
The source data underlying Figs. 1d-f, 2b and d, 3b-d and f, 4d, f and g, 5a and b, and Supplementary Figures 1c and d, 2b-e, 3a-d, 4b-f, 5c and e, and 6a and c are provided as a Source Data file. All other original data and code that supports the findings of this study are available from the corresponding author upon reasonable request. | 7,721 | 2018-12-01T00:00:00.000 | ["Biology"] |
Coloring subgraphs with restricted amounts of hues
Abstract We consider vertex colorings where the number of colors given to specified subgraphs is restricted. In particular, given some fixed graph F and some fixed set A of positive integers, we consider (not necessarily proper) colorings of the vertices of a graph G such that, for every copy of F in G, the number of colors it receives is in A. This generalizes proper colorings, defective coloring, and no-rainbow coloring, inter alia. In this paper we focus on the case that A is a singleton set. In particular, we investigate the colorings where the graph F is a star or is 1-regular.
Introduction
Consider a (not necessarily proper) coloring of the vertices of a graph G. For a set S of vertices, denote by c(S) the number of colors used on the set S. Let F be some fixed graph and let A be some fixed subset of the positive integers (the allowed numbers). We consider colorings of G where for every copy of F in G the number c(V(F)) is in the set A. (Note that F is not required to be an induced subgraph.) We call this a coloring with a Restricted Amount of Subgraph Hues, or RASH for short. Below we will simply refer to it as a valid coloring.
This idea has been studied in other contexts. Most obviously, proper colorings are the case that F = K_2 and A = {2}. Thereafter, probably most studied is the case of coloring the vertices without creating some monochromatic subgraph, such as a star; these are often called defective colorings (see for example [1][2][3][4]). Defective colorings correspond to RASH colorings where A = {2, 3, ..., |F|} (where we use |F| to denote the order of F). More recently, at least in the setting of graphs, there is work on no-rainbow colorings [5][6][7], which correspond to RASH colorings where A = {1, 2, ..., |F| - 1}, and on Worm colorings [8], which correspond to RASH colorings where A = {2, 3, ..., |F| - 1}. These three types of colorings were already considered and generalized for hypergraphs; see [9,10].
In this paper we will focus on the case that A is a singleton set {a}. That is, we consider colorings where every copy of F receives precisely a colors. And especially, we investigate the case that the subgraph F is a star or is 1-regular. We will assume throughout that the graphs are simple and have no isolates.
If G has a valid coloring, that is, one where c(V(H)) is in A for every subgraph H isomorphic to F, then we let W^+(G, F, A) denote the maximum number of colors in such a coloring and let W^-(G, F, A) denote the minimum number of colors in such a coloring. (In hypergraphs these are the upper and lower chromatic numbers, respectively.) Note that if G has a valid coloring then so does any subgraph; thus these parameters are monotonic. Now, if A contains the integer 1, then the existence is guaranteed and the minimum number of colors in a valid coloring is 1. Similarly, if A contains the integer |F|, then the existence is guaranteed and the maximum number of colors is |G|. In particular, if A contains both 1 and |F|, all three questions are trivial.
This yields two special cases of RASH colorings. Consider the case that A = {1}. Define an auxiliary graph H_F(G) with the same vertex set as G but with two vertices adjacent if and only if they lie in a common copy of F. For example, H_{P_3}(G) is the square of G, provided G has no component of order 2. Then in a valid coloring of G, two vertices of G must have the same color if and only if they are in the same component of the auxiliary graph H_F(G). It follows that W^+(G, F, {1}) is the number of components of H_F(G).

Consider the case that A = {|F|}. Then, in a valid coloring of G, two vertices of G must have different colors if they are adjacent in the auxiliary graph H_F(G); if they are not adjacent in H_F(G) then they can have the same color or different. It follows that W^-(G, F, {|F|}) is the chromatic number of H_F(G).
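To make the auxiliary-graph construction concrete, here is a small illustrative sketch (not from the paper) that builds H_F(G) for F = P_3: two vertices become adjacent whenever they lie on a common path with three vertices, which for graphs with no component of order 2 is exactly the square of G. It then reports the number of components of H_F(G) and a greedy (hence only an upper bound on the) number of colors for a proper coloring of H_F(G). The example graph and names are made up.

```python
# Illustrative sketch: build H_F(G) for F = P_3, i.e. join u and v whenever
# they appear together in some path on three vertices of G.
from itertools import combinations

def h_p3(adj):
    """adj: dict vertex -> set of neighbours. Returns the adjacency of H_{P_3}(G)."""
    H = {v: set() for v in adj}
    for centre, nbrs in adj.items():
        # every centre together with two of its neighbours is a copy of P_3
        for u, w in combinations(sorted(nbrs), 2):
            for x, y in ((u, w), (u, centre), (w, centre)):
                H[x].add(y)
                H[y].add(x)
    return H

def components(adj):
    seen, comps = set(), []
    for v in adj:
        if v in seen:
            continue
        stack, comp = [v], set()
        while stack:
            x = stack.pop()
            if x not in comp:
                comp.add(x)
                stack.extend(adj[x] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def greedy_colors(adj):
    colour = {}
    for v in adj:  # greedy proper colouring: an upper bound on the chromatic number
        used = {colour[u] for u in adj[v] if u in colour}
        colour[v] = next(c for c in range(len(adj)) if c not in used)
    return colour

if __name__ == "__main__":
    # a path on 5 vertices; H_{P_3} of it is its square
    G = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
    H = h_p3(G)
    print(len(components(H)), max(greedy_colors(H).values()) + 1)  # -> 1 3
```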
The following result is straightforward. It shows that if F is connected, then we may restrict our discussion to connected graphs G. (However, if F is not connected, then the situation is more complex.) Observation 2.1. Assume F is connected but G is not. Then the existence of a valid coloring of G is equivalent to the existence of such a coloring in all its components G_i. Further, the value W^-(G, F, A) is the maximum of the values W^-(G_i, F, A) over all components G_i; and the value W^+(G, F, A) is the sum of the values W^+(G_i, F, A) over all components G_i.
It is not surprising that these parameters are often NP-hard to calculate. For example, W^-(G, K_2, {2}) is just the ordinary chromatic number of G, while several hardness results for WORM and no-rainbow colorings were shown in [5-8,11].
Stars
In this section we focus on the case that F is a star. We begin by observing that all the possibilities for F = K_{1,2} are trivial or have already been studied. Then we provide a few general results, after which we focus on F = K_{1,3} and F = K_{1,4}.
The star with two leaves
For P_3, the star with two leaves, almost all the cases are covered by the above discussion or have been previously studied. The ones where the allowed set A does not contain both 1 and 3 are:
A = {1}. Here the minimum number of colors is 1. The maximum number is 2 for K_2, and 1 in all other connected graphs.
A = {2}. This is the WORM coloring number (see for example [8]).
A = {3}. The maximum number of colors is |G|. The minimum number of colors is 1 for K_2, but for other connected graphs it is the chromatic number of the square of G.
A = {1, 2}. This is the no-rainbow coloring (see for example [7]).
A = {2, 3}. This is equivalent to 1-defective coloring: every vertex has at most one neighbor of its color (see for example [1,3]).
The situation for other subgraphs F with 3 vertices is similar. For example, the auxiliary graph H_{K_3}(G) is G minus all edges not in a triangle.
We will look at other small stars shortly, but first some general results.
Arbitrary stars
In [8] it was shown that, if a graph has a coloring where every P_3 receives two colors, then there is such a coloring that uses only two colors. That is, if it exists, W^-(G, P_3, {2}) ≤ 2. Now, this result does not generalize to most other F, such as K_3 (see [11]). But we show next that it does generalize somewhat to other stars.
Theorem 3.1. If graph G has minimum degree at least f and has a coloring where every copy of K_{1,f} receives 2 colors, then G has such a coloring using only two colors.
Proof. Consider a valid coloring of G. Let E_M be the set of edges that are monochromatic; that is, those edges whose two ends have the same color. Note that E_M does not contain a copy of K_{1,f}. Let H be the graph G - E_M. Since H has no monochromatic edge, it is properly colored. Consider two vertices u and v in H with a common neighbor w. Then since H is properly colored, neither u nor v has the same color as w. Since w has degree at least f in G, it follows that u and v have the same color. It follows that every cycle C in H has even length, since it is properly colored and every pair of vertices two apart in C have the same color. This means that H is bipartite.
It follows that we can (re)color V(H) = V(G) so that H is properly colored and use only two colors. Now consider this 2-coloring in G. Since the new coloring is a proper coloring in H, every edge that is monochromatic in this new coloring must be in E_M. But that means there is no monochromatic copy of K_{1,f}. That is, every copy of K_{1,f} receives exactly two colors.
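The proof is constructive, and the following sketch (illustrative only; the graph representation and names are made up) follows it directly: starting from a valid coloring of a graph with minimum degree at least f, delete the monochromatic edges, 2-color the resulting bipartite graph H, and return that 2-coloring.

```python
# Illustrative sketch of the recolouring step in the proof of Theorem 3.1:
# given a valid colouring (every K_{1,f} sees exactly 2 colours) of a graph
# with minimum degree >= f, drop the monochromatic edges E_M, properly
# 2-colour the bipartite remainder H = G - E_M, and return that 2-colouring.
def two_colour_from_valid(adj, colouring):
    # H keeps only the edges whose endpoints received different colours
    H = {v: {u for u in adj[v] if colouring[u] != colouring[v]} for v in adj}
    side = {}
    for start in H:                      # traverse each component of H, alternating sides
        if start in side:
            continue
        side[start] = 0
        stack = [start]
        while stack:
            v = stack.pop()
            for u in H[v]:
                if u not in side:
                    side[u] = 1 - side[v]
                    stack.append(u)
                elif side[u] == side[v]:
                    raise ValueError("H is not bipartite; hypothesis violated")
    return side   # a 2-colouring in which every K_{1,f} still sees exactly 2 colours
```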
It is unclear what happens if one drops the condition that G have minimum degree at least f .
For another general result, we consider bounds on the maximum degree of graphs that have a valid coloring. This is equivalent to asking which stars have such colorings. Lemma 3.2. Let F = K_{1,f} and A = {a} with 2 < a < f + 1. Then the maximum degree of a graph with a valid coloring is at most f - 1 + (f - 1)/(a - 2), and this is realizable for all a and f.
Proof. Since a < f + 1, a vertex v can have at most a - 1 new colors among its neighbors. Say v has c neighbors of its own color. Then the sum of the counts of the most numerous a - 2 other colors can be at most f - 1 - c (otherwise one could choose f neighbors that, together with v, use at most a - 1 colors, yielding a copy of K_{1,f} with fewer than a colors). Thus the total number of neighbors is at most f - 1 + (f - 1 - c)/(a - 2). This quantity is maximized at c = 0, where it has the above value. And it is achievable by using a - 1 colors on the neighbors, divided as equally as possible among them.
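For concreteness, the bound just derived can be written out together with the three instances used later in the paper; the displayed form is reconstructed from the proof above rather than copied from the original.

```latex
% Degree bound from the proof above (reconstructed), for F = K_{1,f}, A = {a}, 2 < a < f+1:
\[
  \Delta(G) \;\le\; f - 1 + \frac{f - 1 - c}{a - 2} \;\le\; f - 1 + \frac{f - 1}{a - 2}.
\]
% Instances used later in the paper:
\[
  (f,a)=(3,3):\ 2 + \tfrac{2}{1} = 4, \qquad
  (f,a)=(4,4):\ 3 + \tfrac{3}{2} = 4.5\ (\text{so } \Delta \le 4), \qquad
  (f,a)=(4,3):\ 3 + \tfrac{3}{1} = 6.
\]
```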
More generally one can say the following: For the star F = K_{1,f}, if the allowed set A contains none of 1, 2 and f + 1, then the maximum degree in graphs G that have valid colorings is bounded.
Proof. For a crude upper bound, we argue that G cannot have a vertex v of degree (f - 1)f + 1 or more. For, if no color appears f times among the neighbors of v, then at least f + 1 colors appear there, so at least f of them differ from v's color. Thus either v has f neighbors of the same color (yielding a copy of K_{1,f} receiving at most 2 colors) or v has f neighbors of different colors each distinct from v's color (yielding a rainbow copy of K_{1,f}).
Note that in contrast, if A contains 1 or f + 1 then every graph has a valid coloring. Further, if A contains 2, then there are graphs with arbitrarily large degree that have a valid coloring, such as the complete bipartite graph K_{m,m}.
The star with three leaves
We consider next the star K_{1,3} with three leaves. There are six cases not covered by general results, or by the colorings described earlier. These are the two singletons A = {2} and A = {3}, and the four pairs A = {1, 2}, A = {1, 3}, A = {2, 4}, and A = {3, 4}.
Colorings where every K_{1,3} receives 2 colors
Consider F = K_{1,3} and A = {2}. This is equivalent to a coloring where every vertex of degree at least 3 sees at most one color other than itself and has at most two neighbors of its color. We consider first families of planar graphs. Several authors (e.g. [1]) showed that one can partition the vertex set of an outerplanar graph into two forests of maximum degree 2. In particular, it follows that W^-(G, K_{1,3}, {2}) exists and is at most 2. But for maximal (outer)planar graphs, one can go a bit further. Proof. (a) We saw above that such a graph has a valid coloring. Since G has maximum degree at least 3, one cannot use only one color. So it remains to show that one cannot use more than two colors.
If G has order 4 then this is easily checked; so assume the order is at least 5. We know that G has minimum degree 2. Consider a vertex u of degree 2 with neighbors v and w, necessarily adjacent. Then at least one of these vertices has degree at least 4, say v. Then, by induction, every valid coloring of G - u uses exactly two colors. Since vertex v has degree at least 3 in G - u, it must have a neighbor y (possibly w) of different color in G - u. It follows that u must have the color of v or y. That is, G has only two colors.
(b) If G is hamiltonian, then it contains a maximal outerplanar graph as a spanning subgraph, and so the result follows from part (a). So assume it is not hamiltonian. Then it has connectivity at most 3 and so there is a cut-triangle T. Let G_1 be the graph formed from G by removing the vertices inside T; let G_2 be the graph formed by removing the vertices outside T. By induction, the valid coloring of G when restricted to G_1 uses only two colors, and similarly when restricted to G_2. Let v be a vertex of the triangle. Then, since G_1 and G_2 both have minimum degree at least 3, the vertex v must see both colors in G_1 and both colors in G_2. Since v can see at most two colors in total, it follows that G_1 and G_2 use the same pair of colors. That is, G has only two colors.
For another graph family, consider cubic graphs. Such graphs always have a valid coloring since they have a coloring with two colors where every vertex has at most one neighbor of its color (a 1-defective 2-coloring [12]). It might be interesting to determine the maximum number of colors in such a coloring: What is the maximum possible number of colors in a coloring of a connected cubic graph of order n where every K_{1,3} receives exactly 2 colors?
Note that for regular graphs, the case of F = K_{1,f} and A = {f - 1} also corresponds to what we called a near-injective coloring; see [13].
Colorings where every K_{1,3} receives 3 colors
Consider F = K_{1,3} and A = {3}. By Lemma 3.2, the maximum degree of a graph G with a valid coloring is at most 4. So one natural family to consider is the set of 4-regular graphs. Note that a valid coloring must be proper, and every vertex must have two neighbors of one color and two neighbors of another color. It follows that if only three colors are used in total, then each color must be used an equal number of times, and in particular, that the order of G is a multiple of 3. However, there are other orders for which such a coloring exists. For example, consider the direct product of a cycle C_n with 2K_1; that is, duplicate each vertex of the cycle so that one ends up with a 4-regular graph on 2n vertices. Then a valid coloring is achieved by using n different colors, giving every pair of similar vertices the same color.
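The duplicated-cycle example can be checked mechanically; the short sketch below (illustrative, not from the paper) builds the 4-regular graph on 2n vertices obtained by duplicating every vertex of C_n, colors each duplicate pair with its own color, and verifies that every copy of K_{1,3} receives exactly 3 colors.

```python
# Illustrative check of the example above: duplicate each vertex of C_n,
# give both copies of a cycle vertex the same colour (n colours in total),
# and confirm every K_{1,3} (a vertex plus 3 of its neighbours) sees 3 colours.
from itertools import combinations

def duplicated_cycle(n):
    adj = {(i, j): set() for i in range(n) for j in (0, 1)}
    for i in range(n):
        for j in (0, 1):
            for k in (0, 1):
                adj[(i, j)].add(((i + 1) % n, k))
                adj[((i + 1) % n, k)].add((i, j))
    return adj

def every_k13_sees_exactly(adj, colour, a=3):
    for v, nbrs in adj.items():
        for trio in combinations(sorted(nbrs), 3):
            if len({colour[v]} | {colour[u] for u in trio}) != a:
                return False
    return True

if __name__ == "__main__":
    for n in range(3, 9):
        G = duplicated_cycle(n)
        col = {(i, j): i for (i, j) in G}   # colour = index of the original cycle vertex
        print(n, every_k13_sees_exactly(G, col))   # expected: True for each n
```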
Another family of interest is the set of cubic graphs. Computer search shows that all small cubic graphs have such a coloring. What about in general? We conjecture yes.
Conjecture 3.6. Every cubic graph has a coloring where every K_{1,3} receives 3 colors.
If the cubic graph G has a perfect matching M none of whose edges are in a triangle, then one has a valid coloring by assigning a different color to each edge in M (and so every vertex has a neighbor of its color but no other repetition). Indeed, computer search suggests that the maximum number of colors in a valid coloring is always at least n/2. If true, this would be a strengthening of the conjecture from [7] that every cubic graph has a coloring with at least n/2 colors without a rainbow copy of K_{1,3}. In contrast, there are cubic graphs that require more than 3 colors, since each color class in such a coloring would be a dominating set, and there are infinitely many cubic graphs of order n with domination number more than n/3 (see e.g. [14]).
Other restrictions on K_{1,3}
In each of the remaining cases (where A is a doubleton), the coloring is guaranteed to exist since A contains either 1 or 4. But note that these situations are different to the associated singletons. For example, let F = K_{1,3}. Let G be a graph that has a valid coloring for A = {3} and H be a graph that does not. Then their disjoint union has W^+(G ∪ H, K_{1,3}, {1, 3}) ≥ 4.
The star with four leaves
Finally in this section we briefly consider the star K_{1,4} with the allowed set A a singleton.
Consider the case that A = {4}. By Lemma 3.2 above, the maximum degree is at most 4. Clearly, vertices with degree smaller than 4 pose no constraints as centers of stars. So a natural class to consider is 4-regular graphs. Let K_{2,2,2} denote the 4-regular graph of order 6. Computer search shows that all 4-regular graphs through order 12 have such a coloring, except for K_{2,2,2}. This raises the question: Problem 3.7. Does every connected 4-regular graph, apart from K_{2,2,2}, have a coloring where every K_{1,4} receives exactly 4 colors?
Consider the case that A = {3}. By Lemma 3.2 above, the maximum degree can be as high as 6. It is not true that every 6-regular graph has a valid coloring; for example, through order 11 only the complete multipartite graph K_{3,3,3} has one. Nor is it true that every 5-regular graph has a valid coloring; for example, only 3459 of the 7848 5-regular graphs of order 12 have one. But computer search shows that all 4-regular graphs through order 12 have a valid coloring. This raises the question:
Stripes
In this section we consider colorings where F is 1-regular. We begin by studying the case where F = 2K_2.
Colorings where every 2K_2 receives 2 colors
Let M denote the set of graphs that do not contain two disjoint edges. That is, M is the set of all stars and K_3 together with isolates.
Theorem 4.1. A graph G has a coloring where every 2K_2 receives 2 colors if and only if (i) V(G) has a bipartition (R, S) such that R and S both induce graphs of M, or (ii) G is the disjoint union of stars and K_3's.
Proof. First observe that such a graph has the requisite coloring. In case (i), color every vertex in R red and every vertex in S sapphire. Then since there is no monochromatic 2K_2, every copy of 2K_2 receives both colors. In case (ii), color every component monochromatically with a different color.
Suppose now that the graph G has a valid coloring. If every edge is monochromatic, then the graph is the disjoint union of stars and K_3's, with each component monochromatic, and we are in case (ii) of the characterization.
So, assume there is a proper edge, say rs with r red and s sapphire. Suppose there is another color present; say vertex t is taupe. Then all edges out of t must go to {r, s}. If t has degree 2, then {r, s, t} is an isolated triangle; and indeed no other edge is possible. So assume that t has degree 1, say with neighbor s. By similar argument, vertex r has degree 1 too. Indeed, any vertex x that is not sapphire must have degree 1 with s as its only neighbor. Then, if we change all non-sapphire vertices to be red, we will still have a valid coloring.
That is, we may assume that the coloring uses only two colors. It follows that we are in case (i) of the characterization.
It is straightforward to argue that one can recognize such graphs in polynomial time and calculate the minimum and maximum number of colors used. For a crude algorithm, simply consider all possible stars and triangles and check whether what remains after their edges are removed is bipartite. If we are in case (ii) with two or more components, then the number of colors used equals the number of components. If we are in case (i), then the minimum number of colors used is at most 2, as we argued in the above proof; we can only use more than two colors if there is a vertex s with multiple leaf neighbors and the rest of the graph has a suitable structure. We omit the details.
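As a brute-force companion to the characterization (exponential in |V(G)|, so only for small graphs, and purely illustrative rather than the polynomial-time procedure alluded to above), one can test condition (i) directly: try every bipartition (R, S) and check that neither side induces two disjoint edges. The example graph is made up.

```python
# Illustrative brute-force test of condition (i) of Theorem 4.1: does V(G)
# admit a bipartition (R, S) with neither side inducing two disjoint edges
# (i.e. both induced subgraphs lie in the class M)?  Exponential time.
from itertools import combinations

def induced_edges(edges, part):
    return [e for e in edges if e[0] in part and e[1] in part]

def in_class_M(part_edges):
    # a graph is in M iff no two of its edges are vertex-disjoint
    for e, f in combinations(part_edges, 2):
        if not set(e) & set(f):
            return False
    return True

def satisfies_condition_i(vertices, edges):
    vertices = list(vertices)
    for mask in range(2 ** len(vertices)):
        R = {v for i, v in enumerate(vertices) if mask >> i & 1}
        S = set(vertices) - R
        if in_class_M(induced_edges(edges, R)) and in_class_M(induced_edges(edges, S)):
            return True
    return False

if __name__ == "__main__":
    # made-up example: a 5-cycle with a pendant edge
    V = [1, 2, 3, 4, 5, 6]
    E = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 1), (1, 6)]
    print(satisfies_condition_i(V, E))   # expected: True
```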
Colorings where every 2K_2 receives 3 colors
We consider next the case that every 2K_2 has 3 colors. Like the above result, the characterization is that the graph G must be "nearly bipartite".
Define the following graphs and graph classes. Let H_1 be the set of graphs that contain two adjacent vertices x and y such that every other edge is incident to x or y. Let H_2 be the graphs that contain a triangle x, y, z such that every other vertex is a leaf with a neighbor in {x, y, z}.
Let F_1 be the graph that is obtained from the disjoint union of K_4 and K_2 by identifying a vertex of each. Let F_2 be the graph that is obtained from K_4 - e by picking a vertex x of degree 3 and a vertex y of degree 2 and adding a leaf adjacent only to x and one only to y. Let F_3 be the graph that is obtained from the disjoint union of K_4 - e and P_3 by identifying a minimum-degree vertex of each. Let F_4 be the graph that is obtained from P_6: v_1, v_2, ..., v_6 by adding the edge v_3v_5. We draw these in Figure 1.
For a graph G, we define the reduced version of G by considering every vertex in turn and, if it has more than one leaf neighbor, discarding all but one of these neighbors. It is immediate that a graph has a valid coloring if and only if its reduced version has, because we may assume all leaves at a vertex are the same color.

Proof. These graphs each have a valid coloring. In case (i), color all vertices in one partite set red and give all remaining vertices unique colors. In case (ii), let x be the vertex that was identified. Color it and all other vertices in its partite set of H red, color its two neighbors in the K_3 sapphire, and then give all remaining vertices unique colors. In case (iii), color the subgraph H as shown in the above picture and color the subgraph M monochromatically with color 4. In case (iv), color the graphs with three colors as shown in the above picture. We turn to the proof that these are all the graphs. So assume G has a valid coloring.

Proof. We claim first that there cannot be two consecutive vertices of the same color. Consider a portion of the cycle abcdef (where possibly a = f) where c and d have the same color, say 1. Then, by considering the pair bc, de, without loss of generality b has color 2 and e has color 3. By considering the pair cd, ef, the vertex f must have a color other than 1 and 3. By considering the pair bc, ef, it follows that vertex f has color 2. By similar argument, a has color 3 (and in particular a ≠ f). But then the pair ab, ef is a contradiction. Now suppose there are vertices at distance three of the same color. Consider a portion of the cycle abcdef (where possibly a = f) where b and e have the same color, say 1. Then by considering bc, de, without loss of generality c has color 2 and d has color 3. By considering the pair cd, ef, vertex f must have a color from {1, 2, 3}. By the lack of consecutive vertices of the same color, f cannot be color 1; by considering the pair bc, ef, vertex f cannot be color 2. Therefore f has color 3. Similarly, vertex a has color 2. In particular a ≠ f, and so there is another vertex next to f, say g. Then by considering the pair bc, fg, vertex g must have a color from {1, 2, 3}. But it can easily be checked that each choice leads to a contradiction.
So we have shown that there cannot be two consecutive vertices of the same color nor two vertices at distance three. Now consider again a portion of the cycle abcdef (where possibly a = f). There must be a duplicate color at distance two within abcd (the pair ab, cd must receive exactly 3 colors, so exactly one color repeats among a, b, c, d, necessarily at distance two). Say a and c have color 1, with b of color 2 and d of color 3. Then consider the pair bc, de. Since vertices b and d have different colors, it must be that vertices c and e have the same color. Then f must have a color different from c, d and e. Further, it cannot have the same color as b, by the pair ab, ef. By repeated application of this, it follows that every alternate vertex on the cycle has color 1. In particular the cycle has even length.
If the graph G is bipartite, we are done. So assume there is a triangle T. Claim 4.4. If T is properly colored, then we are in case (iii) or have reduced graph F_1 or F_2.
Proof. Say triangle T has vertex a of color 1, vertex b of color 2, and vertex c of color 3. Suppose there is another color present, say vertex d has color 4. Then consider an edge incident with d. If it goes to triangle T, we have a contradiction. So say it is de. By considering the pairs formed by de and each edge of T, it follows that e cannot have a new color, nor can it have a color from {1, 2, 3}. Thus e has color 4. That is, the edges incident with the vertices not colored from {1, 2, 3} are monochromatically colored; and indeed must induce a component from M colored 4.
So consider the vertices with color from {1, 2, 3}. We cannot have an edge disjoint from the triangle, since if it is monochromatic one can pair it with an edge of T that includes that color, and if proper one can pair it with the edge of T that is identically colored. So all such edges have (at least) one end in the triangle. If the component containing T is in H_1, then we are done. So assume the component containing T is not in H_1. That is, every vertex of T has a neighbor outside the triangle. Consider the possibilities.
One possibility is that the three vertices of T have a common neighbor v. By the above, v must have one of their colors, say 1. If the graph is not just K_4, there is another vertex, say w. This vertex cannot be adjacent to either a or v, so say it is adjacent to vertex c. Then w must have color 2 and must be a leaf. One can check that all remaining edges must be similarly incident with c. Further, the edge va being monochromatic implies that there is no monochromatic edge of color 4. That is, the reduced version of G is F_1 or K_4.
A second possibility is that a, b have a common neighbor v while c has a neighbor w. Suppose that w has color 3. Then considering the pair va, cw it follows that v must have color 2, but then the pair vb, cw is not validly colored. Thus vertex w has color 1, say. By considering the pair va, cw, it follows that the vertex v has color 2. Any other neighbor x of a must have color 2 because of ax, cw but not color 2 because of vb, ax, so a has no other neighbor. Any other neighbor y of b must have color 3 because of the pair av, by. And any other neighbor z of c must have color 1 because of the pair bv, cz. Further, the edge vb being monochromatic means that there is no monochromatic edge of color 4. That is, the reduced version of G is F_2, or F_2 with one or both of the leaves deleted.
A third possibility is that a, b, and c have only distinct neighbors outside the triangle. That is, the component containing T is in H_2.
So we can assume that there is no properly colored triangle T. Suppose then that T is not monochromatic, say a and b have color 1 and c has color 2, and suppose next that there is an edge ds disjoint from T. Since one can pair it with ab, the edge ds must be properly colored and neither end has color 1. Since one can pair it with ac, one end must have color 2. Further, no isolated vertex w of G − T can have color 1 or 2, since V(T) ∪ {w} contains a 2K_2. In particular, G − T is bipartite with bipartition (D, S), where d and every other vertex in D has color 2, while s and every other vertex in S has a color not in {1, 2}.
Note that none of {a, b, c} can have a neighbor in D. Therefore, if neither a nor b has a neighbor in S, then we are in case (ii). So assume one of them, say a, has a neighbor e in S, say of color 3. Note that possibly e = s.
Then consider any vertex v other than {a, b, c, d, e}. Suppose it has a color not in {1, 2, 3}. By potential pairing with ae or ac, it follows that all neighbors of v have color 1 (so in particular v ≠ s), and then there is a contradiction from pairing with ds. That is, all vertices other than a, b have colors from {2, 3}.
It follows that the nontrivial component in G − {a, b} is a star. Now, note that if both b and c have degree 2, we are in case (ii) of the theorem. So either b is adjacent to d, or c is adjacent to s. In the first case we get the reduced graph F_3, or possibly with an edge removed so that it is just K_4 − e ∪ K_2 (in which case G is covered by case (iii)). In the second case we get F_4. And one can check that all remaining vertices are clones of existing leaves.
Finally suppose that all triangles are monochromatic. Then there can be only one. Indeed it must be an isolated component, while the remainder of the graph is bipartite, so that we are in case (iii).
This completes the proof of the theorem.
General stars
Consider colorings where every copy of fK_2 has a colors.
We start with the case that a = 2. Theorem 4.1 showed that if G is connected and a valid coloring exists, then W(G, 2K_2, {2}) ≤ 2. (We do need the connectivity, since the disjoint union of copies of K_3 has a valid coloring but each triangle must have a different color.) The analogue of Theorem 4.1 for more edges turns out to be simpler.
Theorem 4.6. Let f > 2. A graph G has a coloring where every fK_2 receives two colors if and only if G has a bipartition (R, S) such that both R and S induce subgraphs with matching number less than f.
Proof. If G has such a bipartition, then the coloring that assigns one color to R and another to S is valid. So, consider a graph G that has a valid coloring. In particular, consider a valid coloring of G that uses the fewest total number of colors, and suppose that total is more than two.
Then G has vertices of three colors, say colors 1, 2, and 3. Consider recoloring every vertex of color 2 with color 1. This cannot increase the number of colors in any copy of fK_2, but by minimality this coloring is invalid. That means that in G there must be a copy F_12 of fK_2 with colors 1 and 2.
Consider a vertex v of color 3 with a neighbor w, say. If w is disjoint from F_12, then we can take vw, an edge of F_12 containing color 1, an edge of F_12 containing color 2, and f − 3 more edges of F_12, and so obtain a bad fK_2. So w must be in F_12. In particular, there is no edge where both ends have color 3.
By similar logic, there is no monochromatic edge of any color. But that means every edge of F_12 is properly colored. And so vw and f − 1 disjoint edges of F_12 produce a bad fK_2, a contradiction. Thus, G has a valid coloring using only two colors.
We end with some comments about the general case.
Lemma 4.7. For m sufficiently large, the graph mK_2 has a coloring where every fK_2 receives a colors if and only if a ∈ {1, 2, f, f + 1, 2f}.
Proof. The colorings are as follows. For a = 1 give all vertices the same color; for a = 2 properly color each edge with red and sapphire; for a = f color all edges monochromatically but with different colors; for a = f + 1 color all edges properly with one end red and the other end a unique color; and for a = 2f give all vertices different colors.
Now consider a coloring of mK_2 for m large. One can obtain an arbitrarily large collection of edges such that either all edges are monochromatic or all are properly colored. In the former case we can further find a large collection of edges that either are all the same color or are all different colors. Thus, in this case, for the coloring to be valid we need a ∈ {1, f}.
So assume all the edges in the collection are properly colored. Again we can find a large collection such that either all have the same pattern or all have different patterns. In the former case it follows that a = 2. So assume the latter case. We can then find a large collection where every edge e_i has (at least one) end of color i. For the other end, we can again assume that they are all the same color or all different colors. In the former case we need a = f + 1.
In the latter case we can again prune so that each color appears on only one edge, and so a = 2f.
We saw in the proof of Theorem 4.2 that the 5-cycle does not have a coloring where every 2K_2 receives three colors. This can be generalized.
Lemma 4.8. Consider colorings where every fK_2 has a colors. One can color all odd cycles if and only if a ∈ {1, 2, 2f}. One can color all even cycles if and only if a ∈ {1, 2, f + 1, 2f}.
Proof. The result is clear for a = 1 and a = 2f (color all vertices the same or all different). To do a = 2 (assuming f > 1), color alternate vertices red and sapphire, except that possibly one pair of adjacent vertices receives the same color. To do a = f (assuming f > 2) in a large cycle, every edge must have a different monochromatic coloring, which is impossible. To do a = f + 1 (assuming f > 1) in a large cycle, every edge must have the same color pattern but be properly colored. So the cycle length must be even.
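The constructions in this proof are easy to check mechanically. The following sketch (ours, not part of the original text; the vertex labels and the choice f = 3 are only illustrative) enumerates every set of f pairwise disjoint edges of a cycle and verifies that each set sees exactly a colors, confirming the a = 2 coloring of an odd cycle and the a = f + 1 coloring of an even cycle described above.

```python
from itertools import combinations

def disjoint_edge_sets(n, f):
    """All sets of f pairwise vertex-disjoint edges of the cycle 0-1-...-(n-1)-0."""
    edges = [(i, (i + 1) % n) for i in range(n)]
    for combo in combinations(edges, f):
        if len({v for e in combo for v in e}) == 2 * f:  # pairwise disjoint
            yield combo

def valid(coloring, f, a):
    """True if every f pairwise disjoint edges of the cycle see exactly a colors."""
    n = len(coloring)
    return all(len({coloring[v] for e in s for v in e}) == a
               for s in disjoint_edge_sets(n, f))

f = 3
# a = 2 on the odd cycle C_7: alternate red/sapphire, with one adjacent pair repeated.
c7 = ["red", "sapphire", "red", "sapphire", "red", "sapphire", "sapphire"]
print(valid(c7, f, a=2))   # True

# a = f + 1 = 4 on the even cycle C_8: one partite class red, the other all distinct.
c8 = ["red", "c1", "red", "c2", "red", "c3", "red", "c4"]
print(valid(c8, f, a=4))   # True
```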
For example, for F = 3K_2 the above lemma shows that the cycle length is bounded for colorings where a ∈ {3, 5}. If every 3K_2 receives three colors, then the longest cycle colorable is the 10-cycle: color red a vertex v and the two vertices at distance two from it, color sapphire the vertex v′ opposite v and the two vertices at distance two from it, and color the remaining vertices taupe. If every 3K_2 receives five colors, then the only cycle of length more than 6 that can be colored is the 8-cycle: color red a vertex u and the vertex u′ opposite u, color sapphire the two vertices adjacent to neither u nor u′, and give the remaining vertices unique colors. It can also be shown that the cycle length is bounded even if A = {3, 5}. | 8,745.8 | 2017-01-01T00:00:00.000 | [
"Mathematics"
] |
Experimental Investigation of Rotor-Stator Interaction in Axial-Flow Turbines and Compressors *
Detailed results of unsteady flow measurements in a stator-rotor-stator assembly of an axial-flow turbine as well as an inlet guide vane-rotor-stator formation of an axial-flow compressor are presented in this paper.
INTRODUCTION
Most of the present analysis and design methods in turbomachinery are based on steady aerodynamics, although it is well known that the unsteady flow associated with blade row interaction has a major influence on the flow field, boundary layers, turbulence intensities, flow separation, blade vibration, noise, and heat transfer. The demand for higher performance in aircraft as well as heavy-duty gas turbine design results in a close axial spacing of the blade rows, lower aspect ratios and a reduced number of blades. This requires more sophisticated design methods, including the possibility to calibrate numerical methods and their turbulence modelling. There is a strong need to gain a better understanding of the three-dimensional unsteady flow in a blade row, including the upstream and downstream influence of rotor-stator interactions. Gallus et al. (1982) investigated the influence of blade number ratio and blade row spacing on axial-flow compressor stator blade dynamic load and stage sound pressure level. By variation of the parameters blade number ratio and blade row spacing in an annular wind tunnel they found, experimentally as well as by numerical approaches, the results shown in Figs. 1-3. Since the measurements and calculations were done for the midspan section, these results are valid only for the flow at midspan. Figure 1 shows the fluctuation of the local pressure coefficients compared with the local pneumatically measured mean values. The results reveal a much stronger increase of the dynamic pressure fluctuations with decreasing rotor-stator blade number ratio than with decreasing axial spacing. Nevertheless, a low blade number ratio and a low axial spacing result in the highest dynamic blade loads.
* This paper was originally presented at ISROMAC-6.
Figure 2 represents a comparison of the potential-theoretic and the wake interactions as a function of the axial distance between the blade rows. It is obvious that the fluctuation amplitudes (plotted for the first harmonic) of the potential-theoretic interaction (computed after Lienhart, 1973) decay more rapidly with increasing axial distance than the wake interaction amplitudes calculated according to Henderson (1972).
Figure 3 is also taken from Gallus et al. (1982) and compares computed and measured "stimuli" (the ratio of dynamic blade forces to static ones) as a function of the axial distance between the blade rows. The computed curves were gained by Lienhart's prediction method for the potential-theoretic interaction alone. They show that the dynamic forces due to upstream interaction are higher than those due to downstream interaction. The experimental values, measured only in the downstream blade row, also contain the dynamic forces due to wake interaction and are much higher than the potential-theoretic results, which is in good agreement with the information in Fig. 2.
For the design of bladings in axial-flow turbomachines these blade row interaction effects have to be considered. That means unsteady flow effects due to rotor-stator interactions do change the real flow compared with a purely steady design. For example, in the wakes of an axial-flow compressor rotor the exit flow angle fluctuates so strongly that the time-averaged incidence angle on the following stator is higher than that of the steady design for the core flow between the blades. That means the wake effects in compressor blade rows result in a higher turning of the following blade row. It can easily be shown that the wakes in turbines reduce the time-averaged turning of the downstream blade row.
The above discussion of the rotor-stator interaction concerned at first the time-averaged loading of the blade rows. On the other hand, the profile pressure fluctuation due to rotor-stator interaction strongly influences the behaviour of the profile as well as the side wall boundary layers. The periodically fluctuating boundary layer behaviour leads to different positions of the onset and length of transition as well as of boundary layer separation. These effects strongly influence the losses and thus the efficiency. Investigations in annular cascades and real stages showed that the flow through the blade passages and on the blade suction side surfaces is highly three-dimensional. That means midspan investigations are no longer representative of the blade row behaviour in a machine. That is why the investigations reported on in this paper were carried out measuring the flow field over the full span.
EXPERIMENTAL INVESTIGATION OF THE THREE-DIMENSIONAL FLOW IN AN ANNULAR COMPRESSOR CASCADE
The annular compressor stator cascade mentioned in the introduction was investigated with respect to the three-dimensional flow field at various incidence angles by Schulz and Gallus (1988). The inlet swirl was provided by variable inlet guide vanes far upstream of the stator cascade. Major objectives were the studies of corner stall and losses with increasing incidence angle. In order to check the influence of rotor-stator interaction on the 3-D flow field in the annular compressor cascade, Schulz et al. (1990) simulated the unsteady effects of a bladed rotor by using cylindrical spokes instead of blades. The swirl was still provided by the variable inlet guide vanes in front. Figure 4 demonstrates the different boundary layer and corner stall behaviour on the blade suction surface with increasing incidence. The left column shows the oil-flow visualization on the blade suction side without rotor, and the right column with rotor. It becomes obvious that, due to the rotor-stator interaction, the extent of the corner stall region is greatly reduced. Figure 5 confirms these facts by a photo taken from the flow visualization on the hub surface. By total pressure measurements downstream of the annular cascade the authors found remarkable results about the loss behaviour.
Figure 6 shows the loss distribution with increasing incidence. The left-hand side demonstrates the circumferentially averaged losses at midspan, where the profile losses are dominating and side wall flow influences are comparatively small. In this case, the midspan losses are increased by rotor-stator interaction due to an earlier profile boundary layer transition. The right-hand side of Fig. 6 shows the overall losses averaged circumferentially and from hub to tip as a function of incidence angle. In the whole incidence range the overall losses are lower due to rotor-stator interaction causing a higher turbulence level and stronger energy transport from the core flow to the low-energy side wall flow.
It should be mentioned here that these results of rotor-stator interaction on loss behaviour were achieved for an aspect ratio of only 0.86, which explains why the overall losses are reduced by rotor-stator interaction: the strong loss reduction near the sidewalls overcompensates the increase of the profile losses over the small blade height.
These measurements have been supported by further studies on the 3-D boundary layer behaviour and flow field. Due to the complete documentation of data, this test case has served various authors for the validation of their computational codes (Gallus et al., 1990; Lücke et al., 1995; Melake, 1995).
ROTOR-STATOR INTERACTION IN A HIGH SPEED AXIAL-FLOW COMPRESSOR
The research machine is a single-stage axial-flow compressor with an inlet guide vane row as shown in Fig. 7. The main design parameters of the compressor are listed in Table I.
The objective of the project was to study unsteady flow effects in a high-speed compressor. Therefore, the compressor was equipped with various unsteady flow measuring techniques.
The three-dimensional flow field at rotor inlet and outlet was determined with 3-D hot-wire probes. For the unsteady total pressure field downstream of the rotor, a probe with a single Kulite type (XB-X-062) semiconductor pressure transducer mounted under a pneumatic tube was used. These probe measurements were taken at mid-spacing of IGV and stator, respectively. The unsteady pressure at the casing above the rotor-blade tips was detected by a single Kulite (XCQ-062) pressure transducer at 21 axial positions spaced 3.5 mm apart. To investigate the unsteady transition process of the periodically disturbed profile boundary layers on the suction sides of IGV and stator blades, glue-on hot films were used. The IGV and stator blades were in addition provided with semiconductor transducers to measure the unsteady blade pressure distribution. Detailed information on data acquisition and data reduction can be taken from Gallus et al. (1995).
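The ensemble-averaged and RMS quantities referred to below can be illustrated with a minimal data-reduction sketch (ours, under the hypothetical assumption that the transducer signal has already been resampled to a fixed number of samples per rotor revolution using a once-per-revolution trigger; array shapes and variable names are illustrative and not taken from Gallus et al. (1995)).

```python
import numpy as np

def ensemble_statistics(pressure, n_revs, samples_per_rev):
    """Phase-locked ensemble average and RMS of an unsteady pressure signal.

    pressure: 1-D array of length n_revs * samples_per_rev, resampled so that
              each block of samples_per_rev points covers one rotor revolution.
    Returns (ensemble_average, rms_of_random_part), each of length samples_per_rev.
    """
    blocks = pressure.reshape(n_revs, samples_per_rev)
    ensemble_avg = blocks.mean(axis=0)               # periodic (deterministic) part
    random_part = blocks - ensemble_avg              # remaining random fluctuation
    rms = np.sqrt((random_part ** 2).mean(axis=0))   # RMS of the random part
    return ensemble_avg, rms

# Synthetic check: a blade-passing sine wave plus noise over 200 revolutions.
rng = np.random.default_rng(0)
n_revs, samples_per_rev, n_blades = 200, 512, 16
phase = np.linspace(0.0, 2.0 * np.pi, samples_per_rev, endpoint=False)
signal = (np.tile(np.sin(n_blades * phase), n_revs)
          + 0.1 * rng.standard_normal(n_revs * samples_per_rev))
avg, rms = ensemble_statistics(signal, n_revs, samples_per_rev)
```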
In the following, essential results from these investigations with respect to rotor-stator interaction shall be presented. To start with, the tip clearance measurements revealed strong tip clearance flow, as is shown in Figs. 8 and 9. Figure 8 represents the pressure distribution above the rotor (ensemble-averaged data), whereas Fig. 9 shows the RMS data. Due to the strong interaction of tip leakage flow and main flow in the tip region, the pressure minimum (see Fig. 8) is no longer at the blade suction side surface and the leakage flow vortex is strongly directed to the pressure side of the neighbouring blade. The concentration of high RMS data indicates the location of very intensive interaction of leakage flow and main flow. The strength of the tip leakage vortex can also be detected by hot-wire measurements of the secondary flow distribution downstream of the rotor exit, as is shown in Fig. 10.
FIGURE 7 Cross-section of the research compressor.
Very interesting information can be derived from the unsteady pressure distribution measurements at midspan on the suction sides of IGV and stator shown in Fig. 11. The unsteady pressure distributions are plotted to the same scale. Thus, it is obvious that the potential-theoretic upstream interaction of the rotor on the IGV vanes is much stronger than the downstream interaction on the stator blades, which is composed of potential-theoretic and wake interaction. Similar effects have already been mentioned in the introduction of this paper. Along with the pressure fluctuation on the blade surfaces of IGV and stator described before, the study of the boundary layer behaviour is most important with respect to the effect of unsteadiness on the position and axial extent of boundary layer transition as well as its influence on loss development. Figures 12 and 13 allow a comparison of the boundary layer development at midspan on the suction sides of the IGV and stator blades, respectively. Comparing first the ensemble-averaged data in the left columns of both figures, we recognize again that the glue-on hot films also show much stronger values of the real-time data upstream of the rotor on the IGV surface than downstream on the stator. The frequency spectra are plotted in the middle column and show, in agreement with the ensemble-averaged data, a quite different boundary layer behaviour for the IGV than for the stator. Figure 12 shows an increase of the amplitudes of the ensemble-averaged data as well as of the frequency spectra up to sensor 6, immediately followed by a relative minimum at sensor 7 and a relative maximum shortly downstream at sensor 8. Figures 12 and 13 describe, from left to right, the periodic ensemble-average signals, the affiliated frequency spectra and the time-averaged values. Here, the normalized dc-voltages and the time-averaged random and periodic fluctuations are plotted over the associated steady pressure distribution. For a general impression of the transition process in the boundary layer the plots have to be viewed side by side.
During the acceleration the dc-voltages go down continuously until sensor 7, lying slightly behind the suction side pressure minimum. In the front part the boundary layer is laminar. Before the pressure minimum is reached, the random fluctuations start to increase rapidly to their maximum at about 70% chord (sensor 8), which can also be seen in the frequency spectra. The boundary layer decelerates, becomes unstable and thus more sensitive to fluctuations in the core flow. In parallel, the distribution of the time-averaged periodic fluctuations has a minimum at sensor 7 between two maxima. Referring to Schröder (1989) and Hourmouziadis et al. (1986), this is characteristic of a transition via a separation bubble. The bubble oscillates with blade passing frequency and its extent depends on it (Dong and Cumpsty, 1989).
Both the beginning and the end of transition vary in time with rotor-stator position. The whole transition takes place over ca. 30% chord. Not until sensor 9 is the boundary layer completely turbulent.
The profile boundary layer of the stator is influenced by the rotor wakes (Fig. 13). The amplitudes are smaller in comparison with the IGV. The ensemble-average signal of sensor 3 indicates that there the flow is already affected by the transition, although the normalized dc-voltages reach their minimum behind the measured steady pressure minimum at sensor 4. A similar second maximum was observed by Dong and Cumpsty (1989). It is caused by the faster travelling wakes in the free stream. Sensor 4, where in contrast to the IGV both maxima of random and periodic fluctuations are situated, lies in the transition zone all the time.
A so-called "bypass" transition takes place. The whole transition is stretched until sensor 6. This is more than 30% chord.
STATOR-ROTOR-STATOR INTERACTION IN AN AXIAL-FLOW TURBINE AND ITS INFLUENCE ON LOSS MECHANISMS
Detailed measurements have been performed in a subsonic, axial-flow turbine stage to investigate the structure of the secondary flow field and the loss generation (Zeschky and Gallus, 1993). The data include the static pressure distribution on rotor blade passage surfaces and radial-circumferential measurements of the rotor exit flow field using three-dimensional hot-wire and pneumatic probes.
The flow field at the rotor outlet is derived from unsteady hot-wire measurements with high temporal and spatial resolution. The above-mentioned paper presents the formation of the tip clearance vortex and passage vortices, which are strongly influenced by the spanwise nonuniform stator outlet flow. The experimental results of the unsteady flow velocity and turbulence measurements demonstrate the influence of the periodic stator wakes on the rotor flow.
Figure 14 shows the cross flow patterns inside the rotor blade passages. The arrows represent the difference between the local velocity vector and the relative velocity field in the rotor. According to the reduced frequency, two stator wakes are simultaneously present in each rotor passage. Due to the cut stator wakes and secondary vortices from the stator, the turbulence level is increased. Similar results were obtained by Binder (1985). The higher acceleration at the blade suction sides causes a distortion of the stator wakes. Nevertheless, cross flow components are still observed close to the trailing edge and downstream of the rotor.
The relative flow angles and turbulence intensities inside the rotor passage, in snapshots for four rotor-stator positions, are shown in Fig. 15. The distributions show the angle and turbulence conditions over the blade height at an axial position of x/bp = 13.9%. At rotor position 2, higher flow angles occur in the corner between pressure side and casing, representing a cross flow towards the suction side. The reasons for this are the lower velocity and the underturning in the high-loss area of the stator exit flow. Turbulence spots associated with the stator loss cores appear. At position 4 the stator wake is visible over the entire span, with strong distortions at h/H = 15% and h/H = 85%. The measurements show that the rotor secondary flow is mainly caused by the radially nonuniform stator exit flow and the periodically unsteady transport of the stator wakes through the rotor passages. The endwall boundary layers at the rotor inlet are very thin, which is a result of the transport mechanisms and the acceleration in the stator.
Therefore, the highest overturning does not occur close to the side walls but at 15% and 85% span.
CONCLUSIONS
The measurements of the unsteady three-dimensional flow fields in compressors and turbines demonstrated quite different behaviour of the secondary flow and loss production. Inside the turbine stator, strong passage vortices are generated by the endwall boundary layers. The low-momentum fluid of the boundary layers is accumulated inside these two vortices. Therefore, the endwall boundary layers at the rotor inlet are very thin. They contribute only on a small scale to the rotor secondary flow.
The secondary flow field in a compressor blade row is much more complicated. The endwall boundary layers do not have the major and dominant influence that was observed in the turbine tests, since the pressure gradient from blade to blade is lower. The intensity of the secondary flows due to leakage, profile boundary layer centrifugation and nonuniform blade circulation is of the same magnitude. On the other hand, flow separations like corner stall and rotor blade tip stall are present. Furthermore, the wake decay takes place in an adverse pressure field, so that higher turbulence levels are generated inside the wake.
The measuring results demonstrated that the total pressure losses of compressor cascades with low aspect ratios (< 1.0) decreased in unsteady flow compared to those obtained in an undisturbed flow field (without rotor). This tendency has been observed at various incidence angles. Although the profile losses at midspan increase due to the earlier onset of boundary layer transition in the case with rotor, the overall losses decrease. This can be related to the significant loss reduction due to a smaller hub corner stall region. Intensive turbulent mixing in the wakes is believed to diminish the hub corner stall.
Comparing the influence of the rotor-stator interaction in compressor and turbine flow, completely different mechanisms of loss redistribution can be observed. In the turbine rotor, a distinct shift of the loss cores occurring downstream of the stator towards midspan can be observed. Therefore, the dominant mixing effect in the turbine stage is caused by the intensive secondary flow.
FIGURE 9 Unsteady pressure distribution above the rotor (RMS data).
FIGURE 12 Boundary layer development at midspan on suction side of IGV.
FIGURE 14 Secondary flow and turbulence intensity in the turbine rotor (at 72.7% span).
TABLE I Design parameters of the compressor.
FIGURE 8 Unsteady pressure distribution above the rotor (ensemble-average data). | 4,828.6 | 1998-01-01T00:00:00.000 | [
"Engineering",
"Physics"
] |
Design and Analysis of Enterprise Management System Framework Based on Blockchain Technology
With the advent of the data age and the rapid development of Internet of Things technology, blockchain technology has been proposed. Since its advent, blockchain has attracted wide attention. Its tamper resistance, traceability, anonymity and openness have attracted more and more people's interest, and many researchers and practitioners have stepped into this field. In this paper, the related technical concepts of blockchain are described, and blockchain technology is integrated into enterprise management innovation to provide new ideas for enterprise management. This paper designs an enterprise management system based on blockchain technology according to the requirements and carries out bug monitoring and stress testing on the system; the results show that the system can be used well.
Introduction
With the development of Internet technology, big data has received more and more attention. Wealth is not just money; data is becoming a new wealth of the times. Enterprises can fully realize the huge wealth behind data by analyzing and processing relevant data. In order to develop the value behind the data, it is necessary to share the data among national administrative organs and enterprises in the whole industry to achieve win-win results. Mining the value behind data and sharing wealth with others has become a hot topic in the contemporary era. The original common method of sharing data resources is to store data and provide it for use through centralized processing by a third party. Users themselves cannot directly access the data, and there is a certain risk of data leakage. In enterprises, data leakage means the disclosure of confidential information. Therefore, in view of the serious problem of data leakage, scholars and experts began to study data storage security, hoping to meet the basic needs of data security on the basis of data exchange and sharing. Blockchain technology, which emerged in the 21st century, has the characteristics of decentralization and trusted accounting. These characteristics meet the requirements of data sharing and security protection and provide new solutions for relevant experts and scholars in the direction of data security [1].
In the second half of the 20th century, with the development of Internet infrastructure and of software and hardware, the efficiency of human information exchange improved explosively, which brought about the rapid development of various industries; human beings entered the information age [2]. Blockchain technology is actually a combination of distributed storage, P2P networking and cryptographic algorithms, and Bitcoin is based on the relevant technologies of blockchain [3]. With the advent of intelligent technology, and due to the decentralization and unforgeability of blockchain, blockchain is considered able to establish trust in various actual business scenarios and to improve transparency, reliability and security. The combination of blockchain with cloud computing, artificial intelligence, the Internet of Things and other fields will also bring great development opportunities. In 2008 the concept of Bitcoin was put forward, and blockchain technology was identified as the basic technology for building the Bitcoin system. With the popularity of the Internet, enterprise management has broken through the traditional single form, and network-based management has been promoted. However, these new forms of management are facing a serious crisis of trust [2]. In addition, in the field of traditional enterprise management, there are still some problems, such as incomplete and opaque process records. Employers cannot verify the authenticity of the information provided by job seekers when recruiting personnel, which brings a lot of trouble to the recruitment work of employers. Therefore, a storage technology that meets the need for distribution is particularly urgent: it should manage and store all the management records and information data of the enterprise while ensuring they are not modified or damaged. Such a technology should support various forms of recording, be open to all network terminals and meet the basic needs of security, and blockchain technology can meet these requirements. The discovery and use of blockchain technology can not only bring irreplaceable convenience to enterprises and social information sharing, but also pave the way for the development of decentralized information processing [3]. At the same time, it also provides recognition opportunities for industry certification, and blockchain technology is widely accepted by enterprises.
In view of the above-mentioned enterprise management problems and the advantages of blockchain, this paper elaborates the concepts of blockchain-related technology, analyzes the relevant algorithms, designs and develops an enterprise management system based on blockchain technology, tests and runs it, and analyzes its practical operation ability in enterprise management.
Blockchain
A distributed, shared digital ledger captures the essential characteristics of blockchain. The technology is supported by cryptography, and data blocks are stored in chronological order. Because the many characteristics of blockchain make it impossible to change information at will and keep transactions traceable, more and more enterprises, companies and institutions have begun to focus on research into this technology. Blockchain is based on a P2P network. Each node in the network maintains a public ledger. The public ledger is a chain in which all data blocks (blocks) are connected in chronological order [4]. The public ledger ensures that all data is open, transparent and undeniable. Nodes reach a consensus on a transaction through a consensus algorithm, record the transaction in a block, and ensure through cryptography that the transaction will not be tampered with. Bitcoin, which is based on blockchain, also demonstrates this. The main reason why blockchain receives wide attention is that it has the following characteristics. Decentralization: the whole blockchain network is jointly maintained by all participants, and there is no third-party organization; information transmission and verification are carried out through distributed storage and the P2P network mechanism, and problems at any single node will not affect the operation of the entire network.
De-trusting: based on encryption algorithms rather than mutual trust, the two parties can exchange data without needing to trust each other. Interworking and sharing: all users can view the data on the blockchain, which ensures data sharing on the chain and promotes data exchange between nodes.
Trusted database: each node holds a complete copy of the ledger. Unless most nodes can be controlled, the loss or modification of data at any single node is not enough to affect the data of the whole blockchain network [5].
Traceability: the blockchain adopts a chained data structure, and each block carries a timestamp, which makes the data traceable [6].
Ether
Ether is the cryptocurrency of Ethereum. Users can acquire ether by mining or by buying it from the market or from other users. A token is a cryptocurrency implemented in the form of a smart contract that runs on Ethereum [7]. The development of token contracts should follow a standard (e.g., ERC-20) so that the front end (e.g., a wallet) can identify token activities (e.g., token transfers) [8]. The token contract maintains a mapping table, and each entry records a token holder (that is, an account) and the token balance that belongs to it. Unlike ether transactions, token holders transfer their tokens to another account by calling specific functions implemented in the contract. If some tokens are successfully transferred, the mapping table is updated accordingly. Token contracts should issue event notifications to inform other applications, such as wallets and trading markets, of token changes. Any application can learn the execution result of a token contract by listening for the events sent out. In addition to standard functions and standard events, the token standard also allows developers to implement nonstandard functions and nonstandard events. The ERC-20 standard defines six standard function interfaces and two standard events. For example, the declaration of the Transfer event is "event Transfer(address indexed _from, address indexed _to, uint256 _value)", which means that the address _from transfers _value tokens to the address _to. In addition, whenever tokens are transferred, whether by a standard or a nonstandard function, the corresponding event should be emitted.
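The mapping table and event mechanism described above can be modelled with a short, self-contained sketch (a conceptual illustration only; it is plain Python rather than Solidity, and the class and field names are invented, not part of the ERC-20 standard): balances are kept in a dictionary keyed by account address, a transfer updates two entries, and an event record is appended so that listeners such as wallets can react.

```python
class TokenLedger:
    """Toy model of an ERC-20-style token contract: a balances mapping plus Transfer events."""

    def __init__(self, initial_holder, total_supply):
        self.balances = {initial_holder: total_supply}   # the mapping table
        self.events = []                                  # emitted event log

    def balance_of(self, account):
        return self.balances.get(account, 0)

    def transfer(self, sender, recipient, value):
        if self.balance_of(sender) < value:
            return False                                  # transfer rejected
        self.balances[sender] -= value
        self.balances[recipient] = self.balance_of(recipient) + value
        # Analogue of emitting Transfer(_from, _to, _value) for listening applications.
        self.events.append({"event": "Transfer", "from": sender, "to": recipient, "value": value})
        return True

ledger = TokenLedger("0xAlice", 1_000)
ledger.transfer("0xAlice", "0xBob", 250)
print(ledger.balance_of("0xBob"), ledger.events[-1])      # 250 and the Transfer record
```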
Cryptography Technology
Cryptography is a technical science dedicated to the study of how to construct and break ciphers; it involves information security related issues such as authentication and data encryption and decryption [9].
Management Information System
(1) Management information system (MIS) is an organization method that can provide the expected information of past, present and future. It provides standardized information where appropriate to support the planning, control and business functions of the organization to assist in the decision-making process.
(2) The integration of collection, related processing, preservation and publication functions for decision support, coordination and control. It can help internal members to analyze related topics, visualize complex problems, and create new products.
(3) The system carries out the basic concept of being people-oriented throughout. It uses computer hardware, software, network communication equipment and other office equipment to process the required information, including collection, storage, calculation and processing, and protection, in order to support the organization's strategic competitiveness and improve efficiency. A complete man-machine system can support high-level decision-making, middle-level control and grass-roots operation.
Characteristics of Management Information System
(1) Decision support. The system can provide relevant information for managers and senior managers of enterprises, so that managers can extract from the data the information needed to make decisions; it offers relevant decision suggestions and thus serves managers in making decisions. (2) Comprehensive coverage. From a certain point of view, the system is an integrated platform for the comprehensive management and analysis of an enterprise, organization or other institution. When establishing the system, enterprises can distribute the management information of all levels to the individual subprograms of the system and then integrate the information through the subsystems, so as to achieve comprehensive management of all levels of the enterprise and assist senior management.
(3) The combination of man and machine. The ultimate purpose of the management system is to assist users or enterprise executives in making corresponding decisions, and the final decisions are not issued by the system but by the users. Therefore, the use and management of the system must combine people and the system. In the management system, managers and users at all levels are not only users of the system but also its administrators. Therefore, in the process of developing the system, it is necessary to correctly distinguish the roles and functions of human and machine in the system, so that both the system and the individual users can play to their own strengths and the performance and management efficiency of the system are improved [10].
The Following Problems Exist in the Internal Management of Enterprises under an ERP System Environment
(1) The security of ERP system equipment cannot be guaranteed, as it may be affected by natural disasters, equipment failure or loss, and other factors; (2) internal personnel can tamper with or delete part of the data for their own interests; (3) information input errors will affect the follow-up business of the whole system; (4) once the ERP system is infected with a virus, data leakage may occur in the process of information transmission; (5) the ERP system is complex and needs a large number of interdisciplinary talents, but there is no complete incentive mechanism.
Selection of Test Objects
In this paper, based on social research and technical negotiations, we reached a practical agreement with an enterprise in our city: we run the enterprise management innovation system based on blockchain technology in that enterprise, test the reliability and practicability of the system, conduct practical research in the enterprise, and analyze the acceptance of the system.
Experimental Steps
The operation of blockchain in the enterprise management system was tested with respect to system stress, bug monitoring and other aspects. The specific test results are shown below.
Figure 1. Bug Distribution
In the functional test, 97 test cases were designed, 95 were effective, and 185 bugs were found. Among them, 97.1% were mild or general bugs, and there were no fatal bugs. All bugs have been fixed and closed. In the data accuracy test, 50 test cases were designed, 50 were effective, and 43 bugs were found. Among them, 100% were moderate or general bugs, and there were no fatal bugs. All bugs have been fixed and closed. The functions of the system meet the requirements of users. Based on the different pressures designed for the system, the stress performance of the platform was tested. According to the data in Table 1, the platform's handling of concurrent user logins meets the test requirements, and there were no platform crashes or obvious performance defects caused by a large number of concurrent logins.
Figure 2. Views of Employees of Different Classes on System Introduction
Figure 2 shows the views of the 100 employees of all levels selected by the enterprise on the introduction of the system. More than 90% of the middle- and low-level employees in the enterprise welcome the system, while four senior managers say they are not optimistic about it.
Conclusion
This paper analyzes and designs an enterprise management system based on blockchain technology through the elaboration of blockchain technology and the related technologies of enterprise management. The system combines the advantages of traditional management systems with those of blockchain technology, such as decentralization, to give full play to the strengths of both people and computers, meet the needs of enterprise management, and provide future enterprise management with the | 3,106.6 | 2021-04-01T00:00:00.000 | [
"Computer Science",
"Business"
] |
A Comprehensive Review of Evolutionary Algorithms for Multiprocessor DAG Scheduling
The multiprocessor task scheduling problem has received considerable attention over the last three decades. In this context, a wide range of studies focuses on the design of evolutionary algorithms. These papers deal with many topics, such as task characteristics, environmental heterogeneity, and optimization criteria. To classify the academic production in this research field, we present here a systematic literature review for directed acyclic graph (DAG) scheduling, that is, when tasks are modeled through a directed acyclic graph. Based on the survey of 56 works, we provide a panorama of the last 30 years of research in this field. From the analyses of the selected studies, we found a diversity of application domains and mapped their main contributions.
Figure 1. A multiprocessor task scheduling problem (MTSP) instance, adapted from Reference [3], with eight tasks and two processors. In this figure, cc represents the communication cost; for instance, t_6 waits two seconds to start its execution because task t_3 is sending a message through the network.
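The timing rules implied by such an instance can be made explicit with a small evaluation sketch (ours, not taken from any surveyed paper; the task names, costs and example DAG are invented for illustration): a task may start only after all of its predecessors have finished and, when a predecessor ran on another processor, after the corresponding communication cost has elapsed, while tasks assigned to the same processor execute one after another.

```python
def evaluate_schedule(deps, comm, exec_time, assignment, order):
    """Finish times and makespan of a static MTSP schedule with communication costs.

    deps:       dict task -> list of predecessor tasks
    comm:       dict (pred, task) -> communication cost (ignored on the same processor)
    exec_time:  dict task -> execution time
    assignment: dict task -> processor id
    order:      dispatch order of the tasks (must respect precedence)
    """
    finish, proc_free = {}, {}
    for t in order:
        ready = 0
        for p in deps.get(t, []):
            delay = comm.get((p, t), 0) if assignment[p] != assignment[t] else 0
            ready = max(ready, finish[p] + delay)
        start = max(ready, proc_free.get(assignment[t], 0))
        finish[t] = start + exec_time[t]
        proc_free[assignment[t]] = finish[t]
    return finish, max(finish.values())      # the second value is the makespan

# Tiny illustrative instance (not the one shown in Figure 1).
deps = {"t2": ["t1"], "t3": ["t1"], "t4": ["t2", "t3"]}
comm = {("t1", "t3"): 2, ("t3", "t4"): 1}
exec_time = {"t1": 3, "t2": 4, "t3": 2, "t4": 3}
assignment = {"t1": 0, "t2": 0, "t3": 1, "t4": 0}
_, makespan = evaluate_schedule(deps, comm, exec_time, assignment, ["t1", "t2", "t3", "t4"])
print(makespan)   # 11
```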
Similar to many optimization problems in the real world, the search for the optimal MTSP solution has a high computational complexity [1,2], and the problem cannot be solved promptly by brute-force techniques. One of the most investigated approaches to this type of problem is the use of techniques that, in many cases, discover a set of potential solutions, from which one can find the global optimum or an approximate solution to the problem. In this context, evolutionary algorithms (EAs) have been investigated as a search method due to their flexibility in solving complex optimization problems.
EAs [4] can be adapted to various types of problems. Inspired by the evolutionary paradigm, these techniques work with the evolution of several solutions (a population) simultaneously, which provides a significant advantage when compared to other techniques. There are several studies involving EAs applied to the scheduling problem, and this number increases every year. For this reason, it is necessary to organize these works systematically in order to summarize their main characteristics and understand the contributions, authors, and institutions that maintain relevant publications. With such information, it is possible to identify domains that may receive attention in additional research in the future.
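As a concrete, minimal illustration of how such an evolutionary algorithm can be organized for this problem (our sketch under simplifying assumptions: communication costs are omitted, the dispatch order is fixed to a given topological order, and makespan is the only objective; it does not reproduce any specific algorithm from the surveyed papers), a chromosome below is a task-to-processor assignment evolved with tournament selection, uniform crossover and random-reset mutation.

```python
import random

def makespan(assignment, exec_time, deps, order):
    """Schedule length when tasks are dispatched in a fixed precedence-respecting order."""
    finish, proc_free = {}, {}
    for t in order:
        ready = max((finish[p] for p in deps.get(t, [])), default=0)
        start = max(ready, proc_free.get(assignment[t], 0))
        finish[t] = start + exec_time[t]
        proc_free[assignment[t]] = finish[t]
    return max(finish.values())

def genetic_algorithm(exec_time, deps, order, n_procs, pop_size=40, generations=200, p_mut=0.1):
    def fitness(ind):
        return makespan(ind, exec_time, deps, order)

    def tournament(pop):
        return min(random.sample(pop, 3), key=fitness)

    population = [{t: random.randrange(n_procs) for t in order} for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = [min(population, key=fitness)]                              # elitism
        while len(new_pop) < pop_size:
            mother, father = tournament(population), tournament(population)
            child = {t: random.choice((mother[t], father[t])) for t in order}  # uniform crossover
            for t in order:                                                    # random-reset mutation
                if random.random() < p_mut:
                    child[t] = random.randrange(n_procs)
            new_pop.append(child)
        population = new_pop
    best = min(population, key=fitness)
    return best, fitness(best)

# Toy DAG with eight tasks scheduled on two processors.
deps = {"t2": ["t1"], "t3": ["t1"], "t4": ["t2"], "t5": ["t2"],
        "t6": ["t3"], "t7": ["t4", "t5"], "t8": ["t6", "t7"]}
exec_time = {f"t{i}": w for i, w in enumerate([2, 3, 3, 4, 2, 3, 2, 1], start=1)}
order = [f"t{i}" for i in range(1, 9)]
best, length = genetic_algorithm(exec_time, deps, order, n_procs=2)
print(length, best)
```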
One way to investigate a significant amount of studies, and synthesize their contributions, is through a Systematic Literature Review (SLR) [5,6]. SLR consists of a rigorous procedure to select and analyze works that fit a research question without following a bias on the part of the researchers. Moreover, the results of SLR may be replicated and extended as needed.
This paper proposes the application of SLR techniques to find a collection of works that propose EAs for the MTSP. In this way, we present a comprehensive survey of these works through a meta-analytic approach, examining authors, institutions, and other aspects. We organize the works considering task characteristics, computational environments, and optimization criteria. This survey covers an interval of 30 years (from 1990 to 2019) of research papers; thus, we also map the historical evolution of studies in this scientific field. Our investigation considered four digital libraries that index computing studies; after the three stages of paper analysis, we created a collection composed of 56 studies (this paper is an extension of previous work by the same authors [3]; the original work focused only on journal articles, was limited to studies based on genetic algorithms, was less rigorous in terms of inclusion and exclusion criteria, and considered works until 2018). The next section of this paper details our research protocol, presenting research questions, the search key, and inclusion/exclusion criteria. Afterward, Section 3 presents the collection of 56 papers and provides a meta-analysis of them. Finally, we draw our concluding remarks (Section 4) and present the reference list.
Research Protocol
According to Kitchenham [5], the Systematic Literature Review (SLR) is a process of interpreting and evaluating relevant studies related to a research question, topic, or phenomenon. Consequently, we consider the SLR as an outstanding tool for aggregating experiences from a collection of different works [7].
The execution of an SLR follows some formal steps, named research protocol: (i) limit the research question; (ii) compose a search key; (iii) choose the databases; (iv) establish the inclusion/exclusion criteria; and (v) report the stages of the review.
The first step of an SLR consists of limiting the scope of the research. In the context of this paper, we are interested in mapping the last three decades of EAs applied to the MTSP. Moreover, we desire to characterize the application scenarios of such algorithms, that is, we intend to record which authors are dealing with which particularities of this problem (e.g., presence of communication among tasks, processors heterogeneity, and others).
Having this scope in mind, and following the SLR protocol, we formulated the main research question (referred to as MQ). To survey works corresponding to the questions above, we drew up a search key and applied it to the search mechanisms of the digital databases. For this, we joined corresponding keywords and combined different subjects using the logical operators OR and AND, respectively. Since the search key influences the behavior of the search engine, it is necessary to build it by verifying each employed term. Table 1 presents the search key constructed in this study:
("evolutionary algorithm" OR "genetic algorithm") AND "task scheduling" AND (parallel OR multiprocessor) AND ("directed acyclic graph" OR DAG OR workflow) AND (representation OR encoding) AND "genetic operators" AND "objective function"
In our research, we considered "evolutionary" and "genetic" algorithms as synonymous. Indeed, several authors do not provide a proper distinction between both terms; for this reason, we opted to maintain both expressions and filter the results (according to De Jong [4], EAs are a class of algorithms and genetic algorithms are techniques from that class). We also regarded the words "graph", "workflow" and "DAG" in conjunction, as suggested by Robert [1]. Besides, we included both the "parallel" and "multiprocessor" terms.
Finally, the search key includes the expressions "representation", "encoding", "genetic operators", and "objective function". All of these are specific for the context of evolutionary algorithms, and we considered them in order to select papers that provide details about algorithm design.
After preparing the search key, it is necessary to select some digital databases. Such databases must provide search engines that reproduce the semantics of the key to find related works. Table 2 shows the databases used in this study. It is necessary to highlight that we intended to consider the ACM Digital Library (https://dl.acm.org/); however, its search engine did not reflect the main aspects contained in the search key, resulting in unsatisfactory outcomes when compared to the other databases. Although the search key encompasses several terms related to the research questions, the number of works retrieved from the digital databases can be very large. Thus, the protocol used in the SLR must define some inclusion and exclusion criteria that reflect the format and desired aspects of the investigation. The criteria adopted in this protocol are:
1. Papers must deal with MTSP in its static version and without the use of release times and deadlines (in other words, all information related to the problem is known before scheduling and the execution time is not limited);
2. The research papers must deal with MTSP using EA solutions, that is, EAs are the focus of the study;
3. Selected studies cannot combine EAs with other meta-heuristics;
4. All the papers must be in the English language;
5. Papers must be available for consultation on Internet platforms.
Based on all the rules defined here, we performed the following steps, named the search phase, as required by the SLR protocol (we documented each SLR execution phase using annotations and spreadsheets; thus, we may extend this survey or adapt it to answer other research questions):
1. Search in search engines: we performed the search step and collected all resulting works for future analysis;
2. Results filtering: we discarded works that did not meet the inclusion/exclusion criteria;
3. Manual search: we carried out a new survey of works by looking at the bibliographic references of the first pre-surveyed collection. We also submitted these new studies to the inclusion/exclusion criteria.
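A minimal sketch of how this screening phase can be mechanized is given below (our illustration only; the record fields and criterion predicates are invented and do not reflect the authors' actual tooling): retrieved records are first deduplicated by DOI (or normalized title) and then filtered by predicates standing in for the inclusion/exclusion criteria.

```python
def deduplicate(records):
    """Keep one record per DOI, falling back to the normalized title when the DOI is missing."""
    seen, unique = set(), []
    for rec in records:
        key = rec.get("doi") or rec["title"].strip().lower()
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

def apply_criteria(records, criteria):
    """Retain only the records that satisfy every inclusion/exclusion predicate."""
    return [rec for rec in records if all(check(rec) for check in criteria)]

criteria = [
    lambda r: r.get("doi") is not None,                  # DOI available
    lambda r: r.get("language") == "English",            # language criterion
    lambda r: not r.get("hybrid_metaheuristic", False),  # EAs not combined with other meta-heuristics
]

retrieved = [
    {"title": "A GA for DAG scheduling", "doi": "10.0000/x1", "language": "English"},
    {"title": "A GA for DAG Scheduling", "doi": "10.0000/x1", "language": "English"},   # duplicate
    {"title": "Hybrid GA+PSO scheduler", "doi": "10.0000/x2", "language": "English",
     "hybrid_metaheuristic": True},
]
print(len(apply_criteria(deduplicate(retrieved), criteria)))   # 1
```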
Analysis of the Collection
This section presents the collection of works recovered with the SLR. Each part answers one or more secondary questions presented in our SLR protocol. Thus, in Section 3.1, we answer the SQ.01; Section 3.2 deals with SQ.02 and SQ.03; Section 3.3 answers SQ.04, and Section 3.4 SQ.05. Sections 3.5 and 3.6 are related to SQ.06 and SQ.07, respectively. Finally, we present a summary in Section 3.7, answering our main research question (MQ).
Collection and Publication Timeline
This SLR is part of a more comprehensive research project [3,8]. For this reason, we collected the data at separate moments. The first collection took place in April 2018 and returned 430 works distributed across the four databases. However, we observed a total of 50 duplicated works, that is, works that appeared in two or more databases; consequently, after deleting duplicates, our search returned 380 papers. From this collection, we selected 120 papers based on information from their abstracts. Afterward, we applied the inclusion/exclusion criteria (second phase of the SLR protocol) and reduced the number of selected papers to 36. Finally, in the third phase, we analyzed the "References" sections of these papers, selecting 27 more works; therefore, our first collection had 63 papers.
In order to extend our research and include new articles covering the years 2018 and 2019, we repeated all those steps, performing a new search. We found 84 new results, but only five of them met the inclusion/exclusion criteria; thus, we generated a collection of 68 works (this new survey was conducted on February 12, 2020). Before proceeding with the analysis, we opted to follow an additional exclusion criterion: remove works without a Digital Object Identifier (https://www.doi.org/) (DOI). We took this decision to allow the recovery of detailed information about the selected works; after this last step, we eliminated 12 papers, obtaining a collection with 56 works. Table 3 presents all the works that compose our collection in chronological order. When two or more papers were published in the same year, we organize them in alphabetical order. In this table, we also present how each work was published, that is, whether it is a journal article ("Jour") or a conference paper ("Conf"); in the latter case, the publication year refers to the publication date of the conference proceedings. Table 3. Summary of the collection in chronological order. We employ the ID along this paper to refer to the works.
Another essential characteristic to be highlighted is that our search only returned works based on genetic algorithms, which means that our search key, together with our exclusion criteria, eliminated works that design other evolutionary techniques. This situation was expected, since we considered the keywords "genetic" and "evolutionary"; moreover, EAs tend to appear together with other meta-heuristics, violating our inclusion/exclusion criteria.
Environments and Communication Cost
As already mentioned, MTSP can present variations in its multiprocessor system. In this context, papers of our collection deal with two primary environments: (i) homogeneous processors, that is, the execution time of a task does not vary depending on where it is executed; and (ii) heterogeneous processors. There are some papers that present solutions for both scenarios as well as papers that model communication costs. Table 4 lists all the works from the collection organized according to their environments. As observed, most of the works (29 in total) pay attention to the homogeneous environment, that is, when all the processors have the same processing capacity. Besides, 24 papers are related to the heterogeneous environment, and four works [27,28,43,49] deal with both scenarios. Azghadi et al. [53] did not provide sufficient information about where their algorithm is applied; we believe, based on information presented in the publication, that they also considered both.
Based on Table 4, the chart in Figure 3 distributes the publications on a timeline (Azghadi et al. [53] is not considered in this chart), responding to SQ.02. In addition to the significant growth between 2010 and 2011, the heterogeneous scenario is also common in the latest studies. This situation may be related to the popularity of the MTSP in the Infrastructure as a Service (IaaS) paradigm, that is, virtual machines adjusted by frequency controls. We also analyzed the publications considering the use of communication cost, that is, when DAG edges have weights. In this scenario, a task sends data (in bytes) to another, which imposes an additional delay on the execution (without communication, the successor task only waits for the execution of its predecessor). Responding to SQ.03, it is possible to observe that most works (about 80%) employ this scenario, representing the most substantial volume of publications. Table 5 lists these works.
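A sketch of how communication cost typically enters the evaluation of a schedule (a simplified, hypothetical helper, not taken from any specific surveyed paper): the transfer delay of a DAG edge is paid only when predecessor and successor run on different processors.

```python
def earliest_finish(task, proc, exec_time, comm, preds, assigned, finish, proc_free):
    """Earliest finish time of `task` on processor `proc`.

    exec_time[task][proc]  - execution time of task on proc
    comm[(p, task)]        - data-transfer delay on edge p -> task
    preds[task]            - list of predecessor tasks
    assigned[p], finish[p] - processor and finish time of already-scheduled task p
    proc_free[proc]        - time at which proc becomes available
    """
    ready = 0
    for p in preds[task]:
        delay = 0 if assigned[p] == proc else comm[(p, task)]
        ready = max(ready, finish[p] + delay)
    start = max(ready, proc_free[proc])
    return start + exec_time[task][proc]
```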
Optimization Criteria
There are several ways to measure the performance of an MTSP solution. Such a measurement is responsible for driving the search process in an evolutionary algorithm. In practice, the quality of a solution is employed in the design of the fitness functions that must be optimized. Table 6 lists all the metrics found in the collection in order to respond to SQ.04. The most widely used metric is the makespan (sometimes referred to as the scheduling length), which is present in the vast majority of the works surveyed (55 in total). This metric corresponds to the time instant at which all the tasks have finished processing (in Figure 1, for example, the makespan equals 12 time units). Besides makespan, the literature in this area also considers other metrics related to the execution time, namely flowtime [10,34,58,61] and communication cost [17,64].
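For a completed schedule, these time-related metrics are straightforward to compute (a sketch using a hypothetical task-to-finish-time mapping):

```python
def makespan(finish):
    """Time instant at which the last task finishes."""
    return max(finish.values())

def flowtime(finish):
    """Sum of the finish times of all tasks."""
    return sum(finish.values())

finish = {"t1": 4, "t2": 7, "t3": 12}        # task -> finish time
print(makespan(finish), flowtime(finish))    # 12 23
```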
Another popular metric is load balancing [12,17,44,59], which consists of the equitable distribution of tasks over the processors. This metric is directly related to better resource utilization: if all the processors are overloaded, more investment in the computational infrastructure becomes necessary.
It is also possible to observe an increase, in recent years, in the number of works that consider metrics focused on sustainability [36,48,50,56,60,62]. These works deal with issues related to the temperature and the energy consumption of computers (processors). Sustainability metrics are related to the popularization of distributed systems, mainly because of the adoption of computational clouds. Another recently adopted metric is the financial cost [64], which is related to scenarios that employ IaaS platforms (the performance of virtual machines depends on the frequency values contracted by the client).
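Two of the remaining criteria can be sketched with simple, generic definitions (illustrative only; these are not the exact formulations used by the surveyed works): a load-imbalance ratio over per-processor busy times, and an energy estimate that charges different power levels for busy and idle periods.

```python
def load_imbalance(busy_times):
    """max/mean ratio of per-processor busy time; 1.0 means perfectly balanced."""
    mean = sum(busy_times) / len(busy_times)
    return max(busy_times) / mean

def energy(busy_times, horizon, p_busy=10.0, p_idle=2.0):
    """Energy over a schedule of length `horizon` (arbitrary power units)."""
    return sum(b * p_busy + (horizon - b) * p_idle for b in busy_times)

busy = [9, 12, 6]                               # busy time of each processor
print(load_imbalance(busy), energy(busy, horizon=12))
```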
All these performance metrics are related to optimization criteria. Most of them (makespan, financial cost, energy consumption, flowtime, temperature, number of processors, and communication cost) must be minimized. On the other hand, system reliability, load balancing, and parallelism are used to design maximization objective functions.
It is essential to highlight that several works consider two or more of these optimization criteria in conjunction. Indeed, 19 papers related to makespan also consider at least one of the other metrics. In most of these cases, the works use a weighted sum to combine the objectives; however, there are also multiobjective evolutionary approaches based on the concept of the Pareto front [14,36,50,60-62,64]. Finally, only Aguilar and Gelenbe [17] do not directly consider the minimization of makespan, focusing instead on load balancing and communication cost.
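The two combination strategies mentioned above can be illustrated as follows (the objective values and weights are placeholders):

```python
def weighted_fitness(objectives, weights):
    """Scalar fitness from several minimization objectives."""
    return sum(w * v for w, v in zip(weights, objectives))

def dominates(a, b):
    """Pareto dominance for minimization: a is no worse everywhere and better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

sol_a = (12.0, 310.0)    # (makespan, energy)
sol_b = (13.0, 300.0)
print(weighted_fitness(sol_a, (0.7, 0.3)), dominates(sol_a, sol_b))
```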
Number of Citations
In this study, we also computed the number of citations of the works in the collection, answering the secondary question SQ.05. The chart in Figure 4 ranks the ten most cited publications according to the number of citations returned by the Google Scholar platform. We observe that the most cited publication [13] has been referenced by more than 900 related works. In fact, the work by Hou et al. [13] is considered one of the most important in this research field, presenting ideas that were extended by several later works. Another prominent work is due to Wang et al. [25], with about 500 citations. Additionally, we identified which publication venues are preferred. The Journal of Parallel and Distributed Computing published a total of six works [12,16,19,25,44,60], followed by the IEEE Transactions on Parallel and Distributed Systems, with five works [13,31,35,43,50]. After them, Information Sciences [17,40] and Microprocessors and Microsystems [29,45] published two papers each; the other two works appeared in the same conference, the IEEE International Conference on High-Performance Computing and Communication, in 2012 [24,32].
Researchers Affiliations
The collection of works also allowed the extraction of information regarding the authors and their respective nationalities. In this context, "nationality" reflects the researcher's country of affiliation; therefore, it does not necessarily represent their country of origin. We identified 145 distinct researchers in 21 countries, responding to question SQ.06. Figure 5 presents an infographic distributing the researchers across their affiliation countries. The countries with the largest numbers of researchers are India and the United States, with 26 researchers each, followed by China, with 18. Brazil, Singapore, and Turkey have one researcher each.
Word Frequency Rank
Finally, in order to identify the main terms adopted in the publications, this study also extracted the primary expressions used in abstracts and keywords, responding to question SQ.07. Figure 6a quantifies the most common keywords found, while Figure 6b isolates the common expressions found in the abstracts. In both charts, we merged singular expressions with their respective plural versions. The identification of these primary expressions can guide the use of terms more closely related to research in this field, consequently helping indexing mechanisms and the search process.
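The counting itself is simple; a sketch (with made-up input strings) that also merges trivial plural forms, as done for the charts:

```python
from collections import Counter
import re

def term_frequencies(texts):
    counts = Counter()
    for text in texts:
        for word in re.findall(r"[a-z]+", text.lower()):
            stem = word[:-1] if word.endswith("s") and len(word) > 3 else word
            counts[stem] += 1   # crude singular/plural merge
    return counts

abstracts = ["genetic algorithms for task scheduling",
             "a genetic algorithm minimizing makespan"]
print(term_frequencies(abstracts).most_common(3))
```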
Summary
In this survey, we performed a meta-analysis and answered seven secondary questions (enumerated in Section 2). As a result, it is possible to respond to the main question (MQ) of this study, that is, to identify the central works in this research field.
Here we list the ten most cited works in our collection. In order to summarize our answers, Table 7 relates some information about these works: reference ("Ref."), title, publication vehicle, and publication year. We also present the characteristics of the environments, that is, whether the work deals with a homogeneous or a heterogeneous scenario, with or without communication costs ("CC"). Finally, we provide the Scientific Journal Rankings (SJR), retrieved on March 13, 2020.
Conclusions and Future Research
This paper has reported the results of a systematic literature review (SLR) on evolutionary algorithms for the multiprocessor task scheduling problem. Based on four databases, we generated a collection of 56 papers published in journals or presented at conferences between 1990 and 2019; that is, we covered three decades of studies.
From this collection, we presented an overview of the general characteristics of the selected works and, consequently, of this research field. We observed that homogeneous computational environments are still the most common, with 29 published works (about 52% of the collection). Moreover, more than 80% of the selected works deal with communication costs. All the papers except one seek the minimization of makespan, sometimes in conjunction with other optimization criteria.
In this survey, we focus on genetic algorithms; this means that we did not cover other evolutionary approaches (such as differential evolution, estimation of distribution algorithms, memetic algorithms, swarm intelligence, and others). We took this decision to limit our research to the most common evolutionary technique.
Finally, from this collection, we can now extract a large volume of information about the structure of GAs. For example, it is possible to analyze details about solution encoding (i.e., chromosome), genetic operators (mutation and crossover), as well as mechanisms to deal with variability and convergence. Further studies may include an experimental evaluation of these design decisions.
Regarding the framework proposed for this SLR, it can be extended in future work to cover other scheduling problems (for example, considering independent tasks) or other solving methods.
"Computer Science"
] |
Second Order Perturbation Theory in General Relativity: Taub Charges as Integral Constraints
In a nonlinear theory, such as General Relativity, linearized field equations around an exact solution are necessary but not sufficient conditions for linearized solutions. Therefore, the linearized field equations can have some solutions which do not come from the linearization of possible exact solutions. This fact can make the perturbation theory ill-defined, which would be a problem both at the classical and semiclassical quantization level. Here we study the first and second order perturbation theory in cosmological Einstein gravity and give the explicit form of the integral constraint, which is called the Taub charge, on the first order solutions for spacetimes with a Killing symmetry and a compact hypersurface without a boundary.
I. INTRODUCTION
Let us consider a generic gravity theory defined (in a vacuum) by the nonlinear field equations $E_{\mu\nu}(g) = 0$ (1) in some local coordinates. We assume the usual Bianchi identity, $\nabla_\mu E^{\mu\nu} = 0$, which plays a central role in the ensuing discussion. The physical situation (the spacetime) as an exact solution is often too difficult to construct. Hence one resorts to perturbation theory around a symmetric background solution $\bar{g}$ and expands (1) as in (2), where $\lambda$ is a dimensionless small parameter introduced to keep track of the formal perturbative expansion, and the $h$ and $k$ tensor fields are defined via the metric expansion (3), $g_{\mu\nu} = \bar{g}_{\mu\nu} + \lambda h_{\mu\nu} + \lambda^2 k_{\mu\nu}$. As the notation suggests, $(E_{\mu\nu})^{(1)}(h)$ is the linearization of $E_{\mu\nu}$ coming from the expansion of $E_{\mu\nu}[\bar{g} + \lambda h + \lambda^2 k]$, while the second order terms come in two different forms, as shown in (2). At the lowest order one sets $\bar{E}_{\mu\nu}(\bar{g}) = 0$, and at first order the linearized field equations read $(E_{\mu\nu})^{(1)}(h) = 0$ (4). It is clear that these equations are necessary conditions on the first order perturbation $h$ defined via (3). But the crucial point is the following: generically, not all solutions of the linearized equations are viable, since from (2), at second order, we have equation (5). Upon a cursory look, this equation basically says that $(E_{\mu\nu})^{(2)}(h,h)$ is a "source" for the second order perturbation $k$. Thus, in principle, whenever the operator $(E_{\mu\nu})^{(1)}(\cdot)$ is invertible one has a solution. Typically, due to gauge invariance, $(E_{\mu\nu})^{(1)}(\cdot)$ is not invertible, but after gauge fixing it can be made invertible. This is a well-known, but easily remediable, problem with some locally or globally valid gauges, such as the de Donder gauge; so this is not the issue that we are interested in here. Even if a proper gauge is found, there are still situations where (5) leads to constraints on the first order perturbation $h$ for a non-trivial solution $k$. As the basic premise of perturbation theory is its improvability by adding more terms, generically $k$ has to exist without a need to modify the first order perturbation $h$; stated another way, $h$ must be an integrable deformation. To see the constraints, let $\bar{\xi}^\mu$ be a Killing vector field of the background metric $\bar{g}_{\mu\nu}$. Then, contracting equation (5) with $\bar{\xi}_\mu$ and integrating over a hypersurface $\Sigma$ of the spacetime manifold $M$, one has the constraint (6), where one uses the background metric and its inverse to lower and raise the indices, and $\bar{\gamma}$ is the metric on the hypersurface. The left-hand side can be written as a boundary term, where $F^{\mu\nu}(\xi,k)$ is an antisymmetric tensor field; for more details on this see [1,2]. The left-hand side of (6), when $h$ is used, is called the Abbott-Deser-Tekin (ADT) charge [3,4] (an extension of the ADM charge [5]), and the right-hand side of (6) is called the Taub charge [6]. So we have the equality of the ADT and Taub charges as a constraint at second order in perturbation theory whenever the background spacetime has at least one Killing vector field, equation (8), where $\bar{\sigma}$ is the metric on $\partial\Sigma$ and $\bar{n}_\nu$ is the outward unit normal vector on it. If $\Sigma$ does not have a boundary, then the ADT charges vanish identically and so must the Taub charges. The vanishing of the Taub charges is not automatic; therefore, one has an apparent integral constraint (9) on the linearized solution $h$ on a compact surface without a boundary. If this constraint were to be satisfied, then $h$ would be a generic linearized solution which can be added to $\bar{g}$ to improve the exact solution.
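The displayed equations referred to above as (2), (3), (5), and (9) did not survive text extraction. A plausible reconstruction, assuming only the standard second-order expansion described in the surrounding text (index placement and overall normalizations may differ from the original), is:

```latex
% (3) metric expansion and (2) induced expansion of the field equations
g_{\mu\nu} = \bar{g}_{\mu\nu} + \lambda\, h_{\mu\nu} + \lambda^{2} k_{\mu\nu},
\qquad
E_{\mu\nu}(g) = \bar{E}_{\mu\nu}(\bar{g})
 + \lambda\,(E_{\mu\nu})^{(1)}(h)
 + \lambda^{2}\!\left[(E_{\mu\nu})^{(1)}(k) + (E_{\mu\nu})^{(2)}(h,h)\right]
 + \mathcal{O}(\lambda^{3}).

% (5) second-order equation: (E)^{(2)}(h,h) acts as a source for k
(E_{\mu\nu})^{(1)}(k) = -\,(E_{\mu\nu})^{(2)}(h,h).

% (9) integral (Taub) constraint on h for a compact \Sigma without boundary
\int_{\Sigma} \mathrm{d}^{\,n-1}x \,\sqrt{\bar{\gamma}}\;
\bar{n}_{\mu}\,\bar{\xi}_{\nu}\,(E^{\mu\nu})^{(2)}(h,h) = 0 .
```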
On the other hand, if (9) is not satisfied, then one speaks of a linearization instability. This issue was studied in various aspects in [7-13] for Einstein's theory, summarized in [14,15], and extended to generic gravity theories more recently [1,2]. From these works two main conclusions follow: first, in Einstein's theory, a solution of the constrained initial data on a compact Cauchy surface without a boundary may not have nearby solutions, hence such solutions can be isolated and perturbations are not allowed; second, for generic gravity theories in asymptotically (anti)-de Sitter spacetimes, linearization instability arises for certain combinations of the parameters defining the theory. Regarding (9), the obvious question is whether $\bar{\xi}_\mu (E^{\mu\nu})^{(2)}(h,h)$ is a boundary term for Einstein's gravity or not: if it were a boundary term, one would not have the linearization instability observed in the previous works, because it would also vanish identically on a manifold without a boundary. Here, for cosmological Einstein's theory, we show explicitly that $\bar{\xi}_\mu (E^{\mu\nu})^{(2)}(h,h)$ has a bulk and a boundary part; the latter drops for the case of a compact hypersurface without a boundary, while the former is a constraint on the first order perturbation.
The layout of the paper is as follows: in section II we give the details of the first order expression for the cosmological Einstein tensor in a generic Einstein spacetime in terms of the perturbation $h$, and give a concise formula in terms of the linearized Riemann tensor for (anti)-de Sitter backgrounds using our results [16,17]. In section III we study the second order cosmological Einstein tensor in a generic Einstein background and specialize to the case of (anti)-de Sitter. In section IV we discuss the gauge invariance issue and relegate some of the computations to the Appendices.
II. FIRST ORDER PERTURBATION THEORY
Here, to set the stage, we recapitulate what is already known in first order perturbation theory in a generic Einstein background. Using the results of Appendix A, one can show that the linearized cosmological Einstein tensor about a generic Einstein space can be written as a divergence plus a residual part [4,18], with a $K$-tensor and a residual tensor given by the corresponding displayed expressions. The background conserved current can be obtained by contracting the linearized Einstein tensor with the background Killing vector $\bar{\xi}_\nu$. The non-divergence terms cancel upon use of the field equations, and therefore one has a pure boundary term. It is important to note that $(G_{\mu\nu})^{(1)}$ is a background gauge invariant tensor, hence the above expression is gauge invariant; but $F^{\mu\nu}(\bar{\xi},h)$ itself is only gauge invariant up to a boundary term whose divergence vanishes. The above result is valid for generic Einstein backgrounds. For (anti)-de Sitter spacetimes one can do better and express $F^{\mu\nu}(\bar{\xi},h)$ in an exactly gauge invariant way [16,17]. For this purpose, let us introduce a new tensor, which we call the $P$-tensor, which has the following nice properties: • It has the symmetries of the Riemann tensor.
• When evaluated for a background Einstein space, it yields $\bar{P}^{\nu\mu}{}_{\beta\sigma} = \bar{C}^{\nu\mu}{}_{\beta\sigma}$, where $C^{\nu\mu}{}_{\beta\sigma}$ is the Weyl tensor, which vanishes for (anti)-de Sitter spacetimes.
Using all these properties, one can show that at first order the covariantly conserved current is a total derivative, where the first order linearization of the $P$-tensor in an (anti)-de Sitter spacetime takes the displayed form. Making use of this construction, one has the conserved charge in a compact form, where we used the fact that $(G_{\mu\nu})^{(1)} = 0$ and $(R)^{(1)} = 0$ on the boundary. Here $\bar{\sigma}_\nu$ is the unit outward normal vector on $\partial\Sigma$. Gauge transformation properties are discussed below in Section IV in more detail. But here, let us note that under a variation generated by the vector field $X$, which we denote as $\delta_X$, one has $\delta_X (R^{\nu\mu}{}_{\beta\sigma})^{(1)} = \mathcal{L}_X \bar{R}^{\nu\mu}{}_{\beta\sigma}$, which vanishes for (anti)-de Sitter backgrounds (see Section III of [17] for more details and for the gauge transformation properties of expression (16)). Let us now turn to our main goal of computing the analogous expression at second order.
III. SECOND ORDER PERTURBATION THEORY
For any antisymmetric two tensor F βσ , one has the exact identity Soon we will choose F βσ to be the potential of the Killing vector field below. Expansion of this identity at second order yields Making use of the first order linearization of the Bianchi-type identity ∇ ν P νµ βσ = 0, that is∇ and takingF ρσ =∇ ρξσ , (22) reduces tō where for notational simplicity, we introduced Rewriting the algebraic decomposition of the Riemann tensor, one finds final expression in terms of the background Weyl tensor as This is still a rather complicated expression having a divergence part and non-divergent parts. What we know is that, one has∇ µ (ξ ν (G µ ν ) (2) ) = 0. The main question was to show thatξ ν (G µ ν ) (2) is not a pure divergence. One can try to simplify (26) further to recast it in a pure divergence form, but there always remain some terms outside the derivative. One can work out the details in the more manageable (anti) de Sitter case for which the Weyl tensor vanishes; and one ends up with From this expression and from (9) one finds that on a manifold with a compact hypersurface Σ without a boundary, all the first order solutions h µν of (G µν ) (1) = 0, must also satisfy the second order integral constraint for n > 3 Any first order solution that does not satisfy this automatically cannot come from the linearization of an exact metric. Stated in a more geometric vantage point, such solutions do not lie in the tangent space of the "point"ḡ in the space of solutions, they are artifacts of linearization. On the other hand, for spacetimes with a hypersurface that has a boundary, the above construction shows that unlike the ADT charge, which is defined on the boundary, the Taub charge has a boundary and a bulk piece. Nevertheless the values of the charges must be equal to each other up to a sign, as in (8).
In the next section, we provide an explicit form of the (G µν ) (2) and the currentξ ν (G µν ) (2) in terms of the perturbation h which is another way to understand our more compact formulation.
Consider a generic Einstein spaceḡ as the background with
Assuming we have the first order field equations (G µν ) (1) = 0 which yields (R) (1) = 0 and The second order cosmological Einstein tensor upon use of the first order equations becomes where the second order Ricci tensor reads More explicitly, one has From now on we will work in a specific gauge to simplify the computations. The transversetraceless (TT) gauge,∇ µ h µν = 0 and h = 0, is compatible with the field equations (G µν ) (1) = 0, which now read¯ In the TT gauge one has Straightforward manipulations yield The second order perturbation of the scalar curvature Combining the above results we can express the second order cosmological Einstein tensor as a divergence and a residual part as where F σλ µν and Y µν are both symmetric in µ and ν. Here the F -tensor reads and the Y -tensor reads So (G µν ) (2) has a divergence part and a part which is not of the divergence type. One can further try to manipulate the Y µν to obtain some divergence terms, but one always ends up with terms which cannot be written as a divergence of any tensor as expected. Letξ be a background Killing vector field. Contraction with the second order perturbation of the cosmological Einstein tensor yields In background Einstein spaces, the last two terms can be written as The important point is that unlike the case of the first order cosmological Einstein tensor as discussed after (14), at the second order the residual parts as given in the last expression do not vanish upon use of the background and first order field equations. To see this more explicitly, let us look at the (anti) de Sitter and flat backgrounds. In (anti)-de Sitter backgrounds one has and the residual part is One realizes that no amount of manipulations can turn these terms into a pure divergence. This is consistent with our compact expression of the previous section. For example for flat spaces, with∇ µ → ∂ µ , one can easily see that one has the non divergence part reads which cannot be written as a pure divergence.
V. GAUGE INVARIANCE ISSUE
The first order linearized cosmological Einstein tensor is gauge invariant for Einstein metrics under small gauge transformations, but the second order cosmological Einstein tensor is not. Therefore, it pays to lay out some of the details of these and the gauge transformation properties of the tensors and currents we have constructed. Under a gauge transformation generated by a vector field X, the first order metric perturbation changes as As noted above, it is easy to see that (G µν ) (1) is gauge invariant once the background space is an Einstein space. But (G µν ) (2) is not gauge invariant, in fact a pure divergence part is generated. Let us show this in a systematic way following [10]. Let λ ǫ R and ϕ be a one parameter family of diffeomorphisms acting on the spacetime manifold ϕ : R × M → M , then diffeomorphism invariance of a tensor field T means where ϕ * is the pullback map. Let us denote the diffeomorphism by ϕ λ and assuming ϕ 0 to be the identity map. Differentiating (49) with respect to λ yields Using the chain rule one has where D denotes the Fréchet derivative and L X denotes the Lie derivative along the vector field X. In local coordinates for a rank (0, 2) tensor field-which is relevant for field equationthe last expression yields Specifically for the cosmological Einstein tensor T µν = G µν , we have which is a statement of the gauge invariance of the first order linearized cosmological Einstein tensor. For the second order tensors, we can take another derivative of (51) to get which yields in local coordinates When T µν = G µν , we obtain The right-hand side is zero for linearized solutions; and one obtains The right-hand side of this expression is not zero but it can be written as a pure divergence term proving our earlier claim. We give a more direct, albeit highly cumbersome derivation of this expression in Appendix B using the explicit form of (G µν ) (2) . Let us now study the gauge transformation of (27) and see explicitly that the right-hand side is a pure boundary. The first order linearized (P tensor reads which is gauge invariant under the small coordinate transformations for (anti) de Sitter backgrounds as it can be seen from (51). Defining c = (n−1)(n−2) 4Λ(n−3) , we have (59) Since the first two terms are already boundary terms, let us consider the last part: where we used (79) of Appendix B. One can rewrite this as By using∇ νF ρσ =R γν ρσξγ , one has Therefore the Taub current is not gauge invariant as expected, under gauge transformations a boundary part which is composed of the first part of (59) and (62) whose divergence vanishes, is generated.
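The displayed gauge-transformation formulas in this section were also lost in extraction. A hedged reconstruction of the two standard relations the text describes, using the sign convention quoted later in Appendix B ($\delta_X h_{\rho\beta} = -\mathcal{L}_X \bar{g}_{\rho\beta}$; many references use the opposite sign), is:

```latex
% Gauge transformation of the first order perturbation generated by X
\delta_X h_{\mu\nu} = -\,\mathcal{L}_X \bar{g}_{\mu\nu}
                    = -\left(\bar{\nabla}_\mu X_\nu + \bar{\nabla}_\nu X_\mu\right),

% Diffeomorphism invariance of a rank-(0,2) tensor built from the metric
(T_{\mu\nu})^{(1)}(\mathcal{L}_X \bar{g}) = \mathcal{L}_X T_{\mu\nu}(\bar{g}).
```

For $T_{\mu\nu} = G_{\mu\nu}$ (the cosmological Einstein tensor) evaluated on an Einstein background solution, the right-hand side vanishes, which is the statement of gauge invariance of $(G_{\mu\nu})^{(1)}$ referred to in the text.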
VI. CONCLUSIONS
In a nonlinear theory, the validity of perturbation theory about an exact solution is a subtle issue. In General Relativity, if the background metric $\bar{g}$, about which perturbation theory is performed, has Killing symmetries, there are constraints on the first order perturbation theory coming from the second order theory. We have explicitly studied these constraints and have shown that the Taub charge, which is an integral constraint on the first order perturbation, does not vanish automatically. We have identified the bulk and boundary terms in the conserved current $\sqrt{-\bar{g}}\,\bar{\xi}_\mu (G^{\mu\nu})^{(2)}(h,h)$. This issue is quite important when one looks for perturbative solutions in spacetimes with closed hypersurfaces, and it is also relevant for the semi-classical quantization of gravity in such backgrounds.
From another vantage point, one can understand these results as follows: the solution space of the Einstein equations generically forms a manifold, except at solutions $\bar{g}$ that have Killing fields. Around such a metric $\bar{g}$, the linearized field equations, which would normally yield the tangent space of the solution space, give a larger-dimensional space. Therefore the linearized solutions include some non-integrable deformations. One pays for this at second order, where there is a constraint on the first order solutions.
VII. APPENDIX A: SECOND ORDER PERTURBATION THEORY
Let us summarize some results about the second order perturbation theory (see also [19]). Assumingḡ µν to be a generic background metric, by definition one has with an inverse Let T be a generic tensor depending on the metric, then it can be expanded as The Christoffel connection reads where the first order term is and the second order expansion is Since it is a background tensor, we can raise and lower its indices withḡ µν . Our definition is (Γ µνδ ) (1) :=ḡ γδ (Γ γ µν ) (1) .
The first order linearized Riemann tensor is and the second order linearized Riemann tensor is The first order linearized Ricci tensor is and the second order linearized Ricci tensor is The linearized scalar curvature is and the second order linearized scalar curvature is The cosmological Einstein tensor at second order reads We have already given the first order form of the cosmological Einstein tensor in section II.
VIII. APPENDIX B: GAUGE TRANSFORMATIONS
Lie and covariant derivatives do not commute; but, sometimes we need to change the order of the these two differentiations. First we provide some identities which can be easily proven from the definitions. Under a gauge transformation, We used this form in the text. For a generic rank (m, n) tensor field, one can prove the following expression: The second order Ricci tensor (73) transforms as Using δ X h ρβ = −L Xḡ ρβ , one has Then After using the identity (80), one gets which simplifies to where the last four terms yield the Ricci tensor evaluated at the Lie derivative of the linear metric perturbation. Finally we can write For the gauge transformation of the second order linearized scalar curvature we need to compute After a straightforward calculation, the result turns out to be We can collect these pieces to write the gauge transformation of the second order cosmological Einstein tensor in a generic background as Using the above results, the last expression becomes δ X (G µν ) (2) + (G µν ) (1) · L X h = L X (G µν ) (1) .
This result is general: we have not used any field equations or their linearizations. When $h$ is a solution of the first order linearized cosmological Einstein equations, the right-hand side of the last expression vanishes, and the remaining relation shows the gauge non-invariance of the second order cosmological Einstein tensor. Now let us consider the contraction of the result with a background Killing vector field: since $\bar{\xi}_\nu (G^{\mu\nu})^{(1)}$ can be expressed as a boundary term, $\bar{\xi}_\nu (G^{\mu\nu})^{(1)}(\mathcal{L}_X h)$ can also be expressed as a boundary term. Recall that $\bar{\xi}_\nu (G^{\mu\nu})^{(1)}$ can be written as the divergence of an antisymmetric tensor $F^{\nu\mu}$. By expressing $F^{\nu\mu}$ and using $\mathcal{L}_X h$ instead of $h$, we can obtain the boundary form of the left-hand side. With the superpotential $K^{\mu\alpha\nu\beta}$ and $\tilde{h}_{\mu\nu} := h_{\mu\nu} - \frac{1}{2}\bar{g}_{\mu\nu} h$, one can write the current as a total divergence, with $K^{\mu\alpha\nu\beta}$ evaluated at $\mathcal{L}_X h$, which altogether shows that under gauge transformations the Taub charge produces a boundary term.
"Physics"
] |
Atoms in Gaseous and Solid States and their Energy and Force Relationships under Transitional Behaviors
By recalling the conventional insights of different atomic states, it is possible to discover new insights, which can cope the existing challenges. In fact, atoms consist of electrons and energy knot nets. In atoms of all elements, suitably intercrossed overt photons form or construct energy knots. In growing atoms of gaseous and solid states, schemes of intercrossing overt photons become different. To construct atomic lattice in any element, overt photons in suitable length and number intercross by keeping the centers of their lengths at a common point. A scheme of intercrossing overt photons frames energy knots simultaneously clamping to positioned electrons. Atoms are differentiated on the basis of their different numbers of energy knots and electrons. A number of unlled states in an atom represents a valency. Excluding hydrogen, atoms possess the same valence number as specied for them. However, two more electrons with two already prescribed ones for the rst shell form the zeroth ring of the atom. In the hydrogen atom, only two electrons are available for two energy knots; two overt photons of the least measured lengths intercross to form the shape like digit eight. In this way, four electrons are clamped by four energy knots to form helium atom. Thus, a helium atom is related to a zeroth ring in all higher order atoms. In order to validate these aforementioned statements, the concept of studying protons and neutrons is no longer signicant. As far as the atoms of gaseous state are concerned, electrons possess the minimum required potential energy. In this way, electrons of gaseous atoms remain above the middle of clamped energy knots in more than half the length, and they keep on experiencing the maximum required levitational force along the north pole. In atoms of solid state, electrons possess the maximum required potential energy. In this way, electrons of solid atoms remain below the middle of clamped energy knots in more than half the length, and they keep on experiencing the maximum required gravitational force along the south pole. Each transition state of the atom is under the established relation of energy and force. Under transitional energy of an atom, electrons undertake innitesimal displacements within the clamped energy knots, where orientational force keeps on engaging them to introduce the recovery, neutral, re-crystallization and liquid states. Electrons left to the center of atom orientate from north to east clockwise and electrons right to the centre of atom orientate from north to west anti-clockwise during the conversion of gaseous atom to liquid state. On the other hand, electrons left to the center of atom orientate from south to east anti-clockwise and electrons right to the center of atom orientate from south to west clockwise during the conversion of solid atom to liquid state. These fundamental revolutions shed new light on the development of sustainable science and engineering.
Introduction
Understanding the mechanism of atomic formation and then relating atoms with each other helps to develop the sustainable science behind technologically important materials and their applications. The Periodic Table shows the position of atoms belonging to various elements in the form of rows and columns by referring to their characteristics based on atomic number, mass number, valence number, electronic configuration, atomic radius, electronegativity, shielding effect, etc. The Periodic Table also provides information on the filled and unfilled states of electrons, from which the valency of the atom can be assigned. However, the filled and unfilled states in atoms of different elements are arranged in different ways.
The lattice (energy knot net) of carbon atom is the same for different allotropes, but each allotrope has different position of lled and un lled states [1]. Solid atoms belonging to different class of elements elongate through experiencing the surface forces at an appropriate level of ground surface [2]. The developing mechanism of various tiny-sized particles has been explained, where gold atoms possess different behaviors of the electronic structure [3]. Transitional behavior atoms of gold element formed monolayer assembly on the solution surface, where triangle-shaped tiny particles developed under the supplied packets of nanoenergy [4]. Structural evolution of atoms executing con ned interstate electron dynamics involves conservative forces, which has been deliberated in a separate study [5]. The phenomena of heat and photon energy have been explored, where electron dynamics of the silicon atom converted heat energy into the photon energy [6]. These studies indicate that atoms belonging to different elements exhibit different electronic structures to the existing ones.
Regardless of that, mercury belongs to the transition metals group, where it shows neither the solid state nor the gaseous state but behaves as a liquid (as does bromine). Metals such as cesium, gallium and rubidium remain in the solid state at room temperature and start melting just above room temperature. This displays the important role of filled and unfilled states belonging to the outer ring of atoms. Further, inert gas atoms do not show any sort of affinity with atoms of other elements because of their inertness. In this way, they do not bind to evolve a structure; they split under the exceeded propagation of photons having characteristics of current [2]. Therefore, the elongation or deformation of solid atoms and the splitting of inert gas atoms indicate their different behaviors in terms of energy and force. In this study, a basic relation of energy and force in different-natured atoms dealing with their respective transition states is explored.
At suitable concentration of gold precursor, a large number of tiny-sized particles developed having the triangular shape [7]. Morphological structure of gold particles was developed through different bipolar and unipolar pulses [8]. In tiny-metallic colloids, the incompatible packing developed the particles of distorted shapes and compatible packing developed the particles of geometrical shapes [9]. In the pulsebased process, the processing of gold solution developed particles of geometrical shapes and the processing of silver solution did not develop the particles of geometrical shapes [10]. Particles of unprecedented shapes were developed, thereby identifying the speci c role of energy and force in the process of developing [11]. The use of tiny-sized particles for nanomedicine can be either effective or defective because of the varying behavior of comprised atoms [12]. These studies deduce a different behavior of atoms to the existing one.
The atomic structure of different carbon allotropes, along with their binding, has been elaborated [1]. Different results of testing and analysis from different regions of a deposited carbon film illustrate how difficult it is to reach an appropriate conclusion [13]. The morphology and structure of particles in depositing carbon films altered under the variation of localized process conditions [14]. Carbon films developed different morphologies and structures of grains and particles because of different set chamber pressures [15].
The possibility of assembling colloidal matter into meaningful structures enables atoms and molecules to be candidates for future materials [16]. Understanding the individual dynamics of the formation of tiny-sized particles is essential prior to their assembly into useful large-sized particles [17]. Hard coating is due to the varyingly switched energy and forced behaviors of gaseous and solid atoms, where non-conservative energy is engaged at the first stage [18].
Sir Isaac Newton formulated the laws of motion and universal gravitation. The law of universal gravitation involves the mathematical description of gravity. Sir Albert Einstein developed the general theory of relativity along with mass and energy relationship and the principle of relativity was further explained by extending it to the gravitational eld, where the concept of anti-gravity (levity) was not incited; the general theory of relativity remained only a model to a large-scale spectrum structure. The different models such as Rutherford's atomic model and Bohr's atomic model are available in the literature de ning the atomic structure. In addition to these, Yukawa's theory explains the stability of nucleus (neutron to neutron binding) in an atom.
Solid atoms are eligible to evolve different structures when they treat below the suitable level of ground surface. Their ground points also initiate binding below the ground surface. Solid atoms under transitional behavior are eligible to develop different structures when the exerting forces to electrons are in surface format [3,4,[7][8][9][10][11]. Based on these observations, the structure evolution in solid atoms has been discussed, where interstate electron dynamics of atoms involved conservative forces [5]. This indicates again a different behavior of atoms instead of the existing ones describing shells, orbits, band gap, fermi levels, nucleus, etc.
The previous electronic structures describe the atoms with respect to orbital con gurations and shells. In the latter one, the nucleus of an atom also describes the protons and neutrons. More oftenly, the electronic structures describe the atoms with the quantized states in recent works. But the science of materials raises a fundamental question that how the atoms form. Why do atoms exist in gaseous and solid states? What kind of descriptive mechanism do they require in their formation? Orientation force exerting at the electron level should depend on the atomic state, i.e., gaseous, semisolid or solid. In this way, atoms of different elements should deal with the different levels of force. Accordingly, atoms of different elements should deal with the different levels of energy. So, a generalized relationship between energy and force is also presented when atoms are dealing with the transition states.
Results And Discussion
Depending on the attained dynamics and transition state, atoms develop different tiny-sized particles [3]. To develop a mono layer tiny-shaped particle, solid atoms rst arrange in the compact monolayer assembly [4]. A structure evolution in atoms of the suitable elements has been discussed, where con ned interstate electron dynamics involves the conservative forces [5]. At different chamber pressures, carbon lms were synthesized in discernable features of morphology and structure [15]. Incompatible working energy and forced behaviors of gaseous and solid atoms develop hard coating [18]. These studies show that atoms possess the different energy and force behaviors.
Atomic structure of the carbon atom in different allotropic forms was discussed [1]; a lattice or 'energy knot net' in each allotropic form of the carbon atom remained the same. A lattice of carbon atom constructed by the overt photons when having the suitable length and number was also discussed in the study. Overt photons are the subsets of the main stream photons [6]. Atoms do not ionize, they modify into the other forms [2]. This indicates that the centre of an atom belonging to any element does not involve mass of the electron. So, a centre of the atom should be at the point of intercrossed overt photons.
In the formation of the lattice belonging to any atom, overt photons having the appropriate lengths and numbers intercross by keeping the centres of their lengths at a common point. The force and energy of the intercrossed overt photons should remain at the required values to construct the energy knots of the filled and unfilled states of the atom.
Overt photons, when of suitable lengths, design the filled and unfilled states for different atoms. Overt photons construct filled and unfilled states of energy knots as per the requirement of the atoms. In this way, atoms of different elements are differentiated from each other. The intercrossing of overt photons to construct the lattice of an atom (of any element) is in such a manner that the energy knots clamp the positioned electrons. In atoms of gaseous, semisolid and solid elements, the scheme of intercrossing overt photons is different.
The addition of two more electrons to the central ring of any atom, excluding the hydrogen atom, is required to form the zeroth ring. Atoms are already known to have a first shell, which has an occupancy of only two electrons, excluding the hydrogen element. However, in the present case, the first shell is a zeroth ring, which needs four electrons. Therefore, an atom requires two more electrons to form the zeroth ring. A zeroth ring can be termed the nucleus. In the case where no electron is available for an empty energy knot, it is referred to as an unfilled state. In an atom, the number of unfilled states indicates the valence number of the element. When surface force is exerted on the electrons of a solid atom at the appropriate level of the ground surface, the energy-knot-clamped electrons are stretched along both sides (poles) from the centre [2].
Atoms consist of electrons clamping by the sizeable energy knots along with un lled states, too. A least measured length overt photon is formed by the length of two 'unit photons', where each unit photon has a shape like 'Gaussian distribution of both ends turned'. When two least measured length overt photons intercross, they form the knot through the intercrossing. So, a shape of a tilted digit 'eight' is formed, which is related to the lattice of hydrogen atom. The intercrossing of two shapes of tilted digit 'eight' forms the molecular hydrogen, where number of electrons becomes equal to number of electrons in helium atom. However, a helium atom contains four electrons under the originally built-in lattice of energy knots instead of separately intercrossed two shapes of tilted digit 'eight'. In the lattice of helium atom, on intercrossing four overt photons having the least measured lengths, two shapes of digit 'eight' design.
Following the zeroth ring, atoms contain a first ring, a second ring, and so on. In this way, the arrangement of electrons in the available rings (of different atoms) other than the zeroth rings is in the same manner as previously studied. Nevertheless, two more electrons are added to shape the zeroth ring in atoms of all elements except hydrogen. Apart from the addition of two more filled states in the zeroth ring, the net of energy knots in an atom of any element follows the same description of filled and unfilled states as studied previously, except for the hydrogen element. In atoms of various elements other than hydrogen, the central four filled-state electrons form the zeroth ring.
Two overt photons of the least measured length form a tilted digit 'eight' is shown in Fig. 1 (a). The electronic con guration of hydrogen atom, hydrogen molecule and helium atom are displayed in Fig. 1 (b), (c) and (d), respectively. When two photons of the least measured lengths intercrossed, they form a shape of tilted digit 'eight', which is the lattice of hydrogen atom as shown in Fig. 1 (a). Electrons of the tiniest mass trapped in the empty spaces formed by the energy knots (black and green ones) are shown in Fig. 1 (b). Two hydrogen atoms overlap to form the molecular hydrogen as shown in Fig. 1 (c). The structure of helium atom is shown in Fig. 1 (d).
Atoms of some elements can keep empty spaces left at the outer ends of constituted chains. In an atom, terminated ends of chains are related to the outer ring. An empty space is exactly in the size of electron. For example, an argon atom might have eight empty spaces in the outer ring in addition to eight lled states as indicated by arrows in Fig. 2. Those eight empty spaces might not be related to un lled states.
Here, to build shorter chains of states for each case, intercrossed overt photons construct a chain of states whose length is shorter by a unit photon at both ends. The presented observation justifies the sketched model of the argon atom displayed in Fig. 2.
A structure of lithium atom is shown in Fig. 3. The zeroth ring is related to nucleus. The outer ring is related to the rst ring, which is also displayed in Fig. 3. In Fig. 3, lithium atom has a large volume to store energy as arrowed in the regions labelled by 1, 2, 3 and 4. Due to this capacity of storing energy, the structure of lithium is considered quite suitable for energy storage. A lithium atom contains two chains of states as labeled in Fig. 3.
An atom describing valence number involves both lled and un lled states in the outer ring. However, in order to execute interstate electron dynamics, either non-con ned [1] or con ned [5], an atom requires a suitable position of lled and un lled states in the outer ring. Inert gas atoms neither undertake con ned nor non-con ned electron dynamics. More work is required to understand the nature of atoms belonging to elements of inert behavior. A carbon atom remains in gaseous state, semisolid state or solid state depending on the position of electrons and un lled states in the outer ring. By changing the position of an electron in the nearby suitable un lled state, a carbon atom gets converted into another state [1]. The available un lled states or empty energy knots in the outer rings of different atoms are according to the prescribed numbers of electrons and valency. In hydrogen atom, one more electron is required to ll second state of electron. A hydrogen atom does not contain the zeroth ring due to the presence of two electrons in total. The zeroth ring of an atom in all elements except the hydrogen atom can be termed as nucleus. In this way, helium atom is only related to the zeroth ring having no further ring.
A centre of the atom is located at the center of length of each intercrossed overt photon. Therefore, when the electrons have more than half of the mass (length) to the upward sides (along the north pole) from the mid of the clamped energy knots, atoms behave in the gaseous state. In the gaseous atom, energy knots clamped electrons undertake contraction as per potential energy of electrons. However, when the electrons have more than half of the mass (length) to the downward sides (along the south pole) from the mid of the clamped energy knots, atoms behave in the solid state. In the solid atom, energy knots clamped electrons undertake stretching as per potential energy of electrons.
Hence, in solid atoms dealing with the liquid state, there is a direct relationship between 'E_T' and 'F_G', as sketched in Fig. 4 (b).
In the conversion of a solid atom from its original state to the liquid state, the 'E_T' absorbed by the atom is directly proportional to the engaging 'F_G' exerted at the electron level, as indicated in Eq. (2). In gaseous and solid atoms, a zone related to the exertion of impartial force at the electron level is discussed elsewhere [5].
In an atom, a state of electron is related to un lled state and a state of valence number is related to un lled state. States of electrons ( lled states) and valence number (un lled states) are referred to in the same way in atoms of gaseous, semisolid and solid states. However, nets of energy knots formed by the intercrossing of overt photons in atoms of gaseous states clamp electrons from the downward sides. In the atoms of solid states, nets of energy knots formed by the intercrossing of overt photons clamp electrons from the upward sides. Hence, in the atoms of semisolid states, nets of energy knots formed by the intercrossing of overt photons clamp electrons of laterally-orientated position from the centers (mid). Therefore, the formation of schemes of lattices in atoms of gaseous, semisolid and solid states is different, but atoms keep the conserved amounts of force and energy in the original format of ground points -gaseous atoms in the space format, semisolid atoms in the surface format and solid atoms in the grounded format.
In the transition state, either in gaseous atom or in solid atom, electrons undertake in nitesimal displacements by remaining within clamped energy knots. For this reason, the relation of energy and force in atoms of gaseous and solid states has been discussed above. A gaseous or solid atom undergoes liquid state by varying the potential energy of comprised electrons, where electrons remain clamped by the respective energy knots. For gaseous atoms, when are in liquid state, electrons undergo in nitesimal displacement to downward sides, where the lengths (of electrons) become nearly halfway to mid of the clamped energy knots. The electrons of solid atoms in liquid state undergo in nitesimal displacement to upward sides, where the lengths (of electrons) also become nearly halfway to mid of the clamped energy knots.
As per natural phenomenon, the formation (growth) of atoms of different elements is in the zones allocated for them. Here, an electron is not discussed in the context of negative charge, but it is discussed as a particle. A particle of the smallest entity of mass which an atom keeps to shape the electronic structure. An electron of any atom is the smallest unit of concrete mass. It forms the basis of an atom in terms of exerting forces (gravitation, levitation and surface) and varying potential energy.
In the formation (growth) of certain natured atoms, some of the energy knots neither work for lled states nor for un lled states, which remain folded by neighboring chains of energy knots. Folded energy knots in different chains are shown in the atomic structure of titanium [18]. The trapping and capturing of pieces of electron are particularly in the regions of zeroth rings belonging to atoms of suitable elements. Many overt photons intercrossed (by keeping centers at a common point) under a particular scheme to construct a required number of chains (of states) in an atom. However, only four electrons (of complete shape and size) in an atom (except hydrogen atom) are eligible to settle in the zeroth (central) ring.
Particles of fractional electron size may be trapped or captured by the folded energy knots that work neither for filled states nor for unfilled states in atoms of certain behaviors. This kind of work can be studied in different branches of physics. The broken pieces of matter, though smaller in size than an electron, can further diversify particle physics and neutrino physics.
Electrons of suitable atoms when undertake (non-stop) in nitesimal displacements within the clamped energy knots (where atoms do not deal with the elongation or deformation), they can generate radiations of different types (rather than photons). (When the required amount of heat energy was available, a unit photon generated by the forward direction cycle or reverse direction cycle of con ned interstate electron dynamics of silicon atom [6].) Solid atoms when deal with the transition states and electrons are jammed within the clamped energy knots under the availability of tits and bits of heat energy, they do not deal with the elastically driven electronic states. So, under plastically driven electronic states, solid atoms keep the elongation or deformation. It means that atoms have been solidi ed. This is also the case in arrays of solid atoms as they convert into the structures of smooth elements [4].
To clamp the tiniest masses called electrons by the net of energy knots, it is required to form lled and un lled states of an atom as discussed above. Formation (growth) of the highly puri ed form of matter in the smallest shape is an unprecedented phenomenon of nature. Formation of the tiniest matter and nets of energy knots to shape atoms requires a suitable environment. So, the formation (growth) of atoms locates a suitable environment to reveal the features. The characteristics of atoms for each element are different, which can be categorized in already named classes, such as gaseous atoms, semisolid atoms and solid atoms. Hence, the growth of atoms in any class of element is in its respective environment. Naturally, this is according to the conditions that are required to grow atoms of a distinct nature.
Formations of energy knots and electrons are related to one of the most extra-ordinary processes. So atoms of different elements grow at suitable places of the exerting forces.
Atoms of gaseous states grow in their respective environment, which can be in different zones of space, so astronomers, environmentalists, chemists, space scientists and those working in the allied areas can look into the nature of growing atoms. Atoms of semisolid states grow at ground level, so electrical engineers, earth scientists, physicists, environmentalists and those working in the allied areas can look into the nature of growing atoms. Atoms of solid behaviors grow below the ground surface, so metallurgists, geologists, chemical engineers, chemists, paleontologists and those working in the allied areas can look into the nature of growing atoms.
Different behaviors of atoms discussed here en ame new insights. Atoms start functioning for the possible transition states while dealing with the varying energy and force. The presented scheme of atoms allows one to develop atoms with different lattices and electrons, so it works for the new diversity of matter. As this study deals with the general discussions, where incremental changes depend on the schemes of intercrossing photons, lled and un lled states, orientation force, potential energy of electrons and distribution of the electrons in atom. So, there can be a room for further discussions and such investigations lead to attest sustainable utilization of resources.
Conclusion
The formation mechanisms of atoms are disclosed for different states, and the atomic structure of different elements is thus elucidated. For gaseous and solid atoms under transitional behavior, the relationships between energy and force are also explained.
Electrons of the tiniest mass are clamped by the energy knots. Energy knots are formed or constructed by the intercrossing of overt photons when they keep the centers of lengths at a common point. Energy knots of lled and un lled states (or only lled states) of different atoms get formed by the intercrossing of overt photons under a particular scheme. The schemes of intercrossing overt photons construct states of electrons and un lled state(s) in a different manner for atoms of gaseous and solid states. It is also the case in semisolid atoms. In the scheme of gaseous atoms or solid atoms, atoms specify the different number of lled and un lled states. The number of intercrossing overt photons having particular lengths are according to the number of electrons and valence number that an atom keeps for the element.
In the original state of an atom, energy knots clamped electrons keep them in the states. In the intercrossing of overt photons to design a lattice of any atom, the element of force remains dressed up by the energy. In addition to the prescribed number of electrons of different atoms, the addition of two more electrons in the central rings form their zeroth rings. Except hydrogen, atoms of all elements possess two additional electrons along with the already designated two electrons. So, in the new scheme of an atom, altogether four electrons form the zeroth ring. Hence, an atom does not require protons and neutrons to de ne the nucleus. At place of rst shell, a rst ring is studied. Instead of orbits, shells or quanta, atoms form the zeroth ring and number of rings (/outer ring) to keep lled and un lled states.
The shape of the zeroth ring is like a 'cross', where two shapes of 'eight' digit intercross to keep four electrons. An atomic structure of helium is identical to the zeroth ring, which exists at place of nucleus in the atoms of all elements except hydrogen atom. A zeroth ring in an atom is related to the central ring. In hydrogen atom, one more electron is required along with the one previously designated. However, it does not form a zeroth ring. A structure of hydrogen atom is half to the structure of zeroth ring or helium atom. A shape of digit 'eight' indicates the lattice of hydrogen atom. By lling two electrons in hollow spaces of digit 'eight', a hydrogen atom is formed. The overlying two hydrogen atoms form a hydrogen molecule, but not the way helium atom forms a structure. A helium atom itself is related to a zeroth ring.
In gaseous atoms, electrons keep more than half of their length above the mid of clamped energy knots along the north poles. Atoms in gaseous state possess the minimum required potential energy, so energy knots clamped electrons deal with the maximum required contraction. In solid atoms, electrons keep more than half of their length below the mid of clamped energy knots along the south poles. Solid atoms possess the maximum required potential energy, so energy knots clamped electrons deal with the maximum required stretch.
"Physics"
] |
An Enhanced Random Vibration and Fatigue Model for Printed Circuit Boards
Aerospace vehicles are exposed to random vibration loads during most of their operational lifetime. These harsh conditions excite vibration responses in the vehicles' printed circuit boards, which can cause failure of mission functionality due to fatigue damage of electronic components. A novel analytical model to evaluate the useful life of embedded electronic components (capacitors, chips, oscillators, etc.) mounted on Printed Circuit Boards (PCBs) is presented. The fatigue damage predictions are calculated from the relative displacement between the PCB and the component, the lead stiffness, and the natural vibration modes of the PCB and the component itself. Statistical methods are used for fatigue cycle counting. The model is applied to experimental fatigue tests of PCBs available in the literature. The analytical results are of the same order of magnitude as the experimental findings.
INTRODUCTION
Aerospace vehicles (airplanes, launcher vehicles, and satellites) are exposed to harsh random vibration environments. These mechanical stresses occur during qualification and acceptance tests, launching and orbit injection (spacecraft), and during continuous flight operations (airplanes). Aerospace vehicles shall be designed to withstand these conditions, providing a safe environment to the onboard equipment. Electronic equipment consists mainly of Printed Circuit Boards (PCBs) and a support structure, which shall ensure a proper mechanical, thermal, and electrical environment for the embedded electronic components (ECs), such as capacitors, resistors, chips, oscillators, etc. The design of aerospace equipment shall be capable of predicting the time to failure of the electronic components. Random vibration is the common environment of aerospace vehicles, and therefore fatigue calculations are treated statistically. Wong et al. (2000), Wu (2009), and Grieu et al. (2008) presented stress calculations at PCBs and ECs based on accurate Finite Element Method (FEM) modeling. The methodology consisted of meshing the entire PCB and the electronic component parts, such as leads and body. This approach tends to be too time consuming when applied to PCBs with several components. Cifuentes (1994), Yang et al. (2000), Amy et al. (2006, 2010), and Sayles and Stoumbos (2015) proposed simplified FEM models for calculation of the PCB's eigenvalues and eigenvectors. The mounted components on the board were modeled by updating the mass and stiffness properties of the PCB.
Experimental vibration tests were presented by Singal et al. (1992), Amy et al. (2009, 2010), and Yang et al. (2002). Veprik and Barbitsky (2000), Veprik (2003), and Esser and Huston (2003) presented experimental data for damped systems. The experimental approach validates only the tested or similar setups, so additional tests are necessary for different configurations. Silva and Gonçalves (2013) proposed an analytical model for lead stress prediction. The leads were modeled as beams, the EC as a 6-degree-of-freedom rigid body, and the PCB as a simply supported plate. These authors did not apply a fatigue model for component life evaluation. Steinberg (2000) proposed a semi-analytical method for estimating the useful life of electronic components mounted on PCBs under random vibrations. The PCB was modeled as a beam. The beam displacements were taken into account, but not the rotations. The fatigue damage prediction was performed for an electronic component (EC) in the center of the board, and the results were approximated elsewhere. Steinberg's method is one of the most used methods in the aerospace industry.
The objective of this work is to present a novel, accurate analytical model with low computational cost for fatigue life prediction of electronic components mounted on PCBs, which typically fail at the leads. The loads at the leads are calculated from the relative displacements and rotations between the PCB and the EC when the system is excited. The PCB is modeled as a thin plate, considering a uniform smearing of the electronic component masses. The EC is modeled as: (a) a rigid body and (b) a system of two perpendicular beams (a 3-degree-of-freedom system). The fatigue life is evaluated by Dirlik's (1985) statistical fatigue damage model.
The present work enhances the well-known semi-analytical method developed by Steinberg (2000) by proposing a plate model (instead of a beam) for the PCB, and by adding the contributions of the PCB rotations, the EC body flexibility, and the EC first natural frequency. A similar model does not seem to be available in the literature. Also, the present work adopts Dirlik's random fatigue damage counting method (instead of the 3-band technique), one of the methods most used in industry for fatigue calculations. Steinberg's method compensates for the lack of these features by the use of the experimental constant Cst, defined in his book for each type of electronic component. Steinberg's method is expressed by an empirical relation in which N is the cycles to failure, BPCB is the PCB length, L is the EC length parallel to BPCB, r is a factor given by a sine function to account for the EC position on the PCB (r = 1 in the PCB's center, r < 1 in the other regions), Cst is a constant for different types of ECs, tPCB is the PCB thickness, and Zmax_rms is the root mean square of the PCB maximum displacement, all given in millimeters.
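As a point of orientation, the commonly cited form of Steinberg's relation can be sketched as below. This is a hedged illustration taken from the standard electronics-fatigue literature, not necessarily the exact equation referenced above: the 0.00022 constant, the 20-million-cycle reference life, and the fatigue exponent b = 6.4 are assumptions, and the numeric inputs are placeholders.

```python
import math

# Hedged sketch of Steinberg's classic limit: the 3-sigma PCB displacement that
# gives roughly 20 million cycles of life for a component at a given location.
def steinberg_allowable_displacement(B_pcb, L_ec, C_st, t_pcb, r):
    """Allowable 3-sigma board displacement (mm) for ~20e6 cycles."""
    return 0.00022 * B_pcb / (C_st * t_pcb * r * math.sqrt(L_ec))

def steinberg_cycles_to_failure(z_allow, z_max_rms, b=6.4, n_ref=20e6):
    """Basquin-type extrapolation from the allowable displacement to N cycles."""
    return n_ref * (z_allow / (3.0 * z_max_rms)) ** b

# Illustrative numbers only (not taken from the experiments discussed here).
z_allow = steinberg_allowable_displacement(B_pcb=152.4, L_ec=24.0, C_st=1.26,
                                            t_pcb=1.45, r=1.0)
print(steinberg_cycles_to_failure(z_allow, z_max_rms=0.05))
```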
ELECTRONIC COMPONENT LEADS FATIGUE
Electronic components can be simply approximated as consisting of three different parts. The main part is the component body, usually covered by a case made of plastic or ceramic. The leads connect the component to the PCB circuitry. The third part is the solder joints, which attach the leads to the board. The typical failure of electronic components under vibration is the fatigue of the leads and/or solder joints due to the relative displacements between the component and the PCB.
The lead is assumed to be a beam with bending, axial, and torsional stiffness. The lead geometry is modeled with the polygonal form exhibited in Fig. 1. For simpler cases, some dimensions can be set to zero.
The lead fatigue life model is presented below. Section 2.1 describes the lead stiffness calculations, followed by the lead stretch model definition in Section 2.2. The formulation is extended to account for the component body flexibility (Sections 2.3 and 2.4) and PCB bending rotations (Sections 2.5 and 2.6). At last, the stresses (Section 2.7) and the fatigue damage (Section 2.8) complete the model.
Lead Stiffness
Figure 2a shows the free-body diagram of a lead when the component moves perpendicular to the PCB plane (lead longitudinal direction). The model considers that the leads are clamped on the component body (extremity 1), which is so far assumed rigid, and that the deformed PCB applies an external force P at extremity 2 (the solder joint). Figures 2b and 2c present the free-body diagrams when the PCB curvature introduces bending and torsional moments, M and T, respectively, at point 2. Based on Euler-Bernoulli beam theory, the lead stiffness in the lead longitudinal direction due to a force P is expressed in terms of E, the elasticity modulus of the lead, and Iz'i and Ai, which are respectively the moment of inertia parallel to the z' axis taken at the section geometric center and the cross-section area of part i, i = 1, 2 (b1, b2), as depicted in Fig. 1 (lateral view 2). Similarly, the bending and torsional stiffness are expressed in terms of G, the shear modulus of elasticity, and Ji and Ix'i, which are respectively the polar moment of inertia and the moment of inertia parallel to the x' axis taken at the section geometric center of part i, i = 1, 2 (b1, b2).
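For orientation only, the textbook single-segment cantilever stiffnesses that underlie this kind of lead model are sketched below; this simplified one-segment case is an assumption for illustration and does not reproduce the paper's two-part polygonal lead expressions.

```python
# Hedged sketch: classic stiffnesses of a single uniform cantilever of length L,
# clamped at one end and loaded at the free end. The paper's leads have two
# segments (b1, b2); this one-segment case is only a simplified illustration.
def cantilever_stiffnesses(E, G, I, J, A, L):
    k_transverse = 3.0 * E * I / L**3   # tip force perpendicular to the beam axis
    k_axial      = E * A / L            # axial (stretch) stiffness
    k_bending    = E * I / L            # tip moment per unit end rotation
    k_torsion    = G * J / L            # tip torque per unit twist
    return k_transverse, k_axial, k_bending, k_torsion
```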
Lead Stretch
The lead stretches are due to the PCB displacements and the EC acceleration. The PCB is modeled as a simply supported rectangular plate of dimensions a × b. The x-y coordinate system origin is located at the PCB's left lower corner, and the xc-yc EC local coordinate system is located at the component center, as depicted in Fig. 3. The leads' local coordinate system x'-y' is perpendicular to the plane x-y of the PCB: x' lies on the x-y plane and y' is perpendicular to this plane. The Wmn natural frequencies of the PCB and its Zmn displacements are given by the standard simply supported plate expressions (Blevins 2001), where m and n are the number of half waves in the x and y directions, respectively, and EPCB, tPCB, ρPCB, and υPCB are the PCB elasticity modulus, thickness, density, and Poisson's ratio. The PCB density ρPCB is the ratio between the total mass of the system (PCB plus ECs) and the volume of the PCB.
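A hedged sketch of the usual simply supported thin-plate frequency formula of the kind tabulated by Blevins is given below; the variable names and the illustrative material values are assumptions, and the expression is not claimed to match the paper's equation for Wmn verbatim.

```python
import math

# Hedged sketch: natural frequencies of a simply supported a x b plate of
# thickness t (standard thin-plate result). The smeared density rho is the
# system mass (PCB plus ECs) divided by the PCB volume, as described above.
def plate_natural_frequency_hz(m, n, a, b, E, t, rho, nu):
    D = E * t**3 / (12.0 * (1.0 - nu**2))            # flexural rigidity
    omega = math.pi**2 * ((m / a)**2 + (n / b)**2) * math.sqrt(D / (rho * t))
    return omega / (2.0 * math.pi)

# Illustrative values only, loosely inspired by a 152.4 mm square, 1.45 mm board.
print(plate_natural_frequency_hz(1, 1, 0.1524, 0.1524,
                                 E=20e9, t=1.45e-3, rho=2500.0, nu=0.12))
```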
Bmn(W) determines the vibration magnitude for a uniform acceleration distribution α over the PCB. So far, it is assumed that the PCB and the leads are both flexible, that all the displacements are small (sin θ ≈ θ), and that the electronic component body is a rigid plate. Then the transverse (lead longitudinal) displacements can be written in terms of Z0, the displacement in the z direction at the center of the EC (x = e, y = f), and θc (or ϕc), the rotation around the x (or y) axis, as displayed in Fig. 4. These values are determined considering a uniform distribution of springs connecting the component to the board and the equilibrium equations (details in Appendix A). Now, it is possible to determine the lead stretch δcg,mn at the position (x, y) and at the frequency W due to the PCB vibration, where δmn is a transfer function from Bmn to δcg,mn.
In addition to the stretches due to the PCB displacement, the acceleration itself induces a mass/spring/damper behavior on the component, modeled as a 1-degree-of-freedom system with total stiffness calculated as the sum of the stiffnesses of all n leads kl (Eq. 2). The component natural frequency, vibration transfer function, and stretch due to the acceleration follow from this 1-degree-of-freedom model, where mc is the total mass of the electronic component (EC), ξc,long is the lead damping coefficient, adopted as 0.05 (as usual for some metal alloys), and i is the imaginary unit.
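A minimal sketch of this single-degree-of-freedom response is given below; the textbook transfer-function form and the function names are assumptions for illustration, not the paper's exact equations.

```python
import math

# Hedged sketch of the 1-DOF model above: total lead stiffness k_total, component
# mass m_c, damping ratio xi (0.05 in the text), base acceleration alpha applied
# at circular frequency omega.
def component_natural_frequency(k_total, m_c):
    return math.sqrt(k_total / m_c)                  # rad/s

def amplification(omega, omega_c, xi=0.05):
    """Complex dynamic amplification of a damped 1-DOF oscillator (textbook form)."""
    r = omega / omega_c
    return 1.0 / (1.0 - r**2 + 2.0j * xi * r)

def acceleration_stretch(alpha, omega, omega_c, xi=0.05):
    """Quasi-static stretch alpha/omega_c^2 amplified by the transfer function."""
    return abs(amplification(omega, omega_c, xi)) * alpha / omega_c**2
```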
Component Body Flexibility to Displacements
In the previous Section, the geometric stretch δcg,mn and the acceleration stretch δca,mn are calculated assuming the component to be a rigid body. However, the component body is flexible and absorbs part of the strain, which relieves the stresses on the leads. The component flexibility calculation is not a trivial task, because the lead itself is a flexible structure and the loads depend on the component position and on the PCB vibration mode. The EC is assumed to be a rectangle of side lengths c × d. Each lead, located on the EC boundary at distances lx and ly from the EC center, applies a concentrated load P, as shown in Fig. 5. The EC is modeled as a sum of two distinct beams clamped at the component center, where Rx and Ry are the fractions of the load applied in each direction. Therefore, the total component body displacement (δc) at the lead is the sum of the displacement due to the RxP force (δc,x) and the displacement due to the RyP force (δc,y), where E is the elasticity modulus, Ix,c and Iy,c are the moments of inertia around the xc and yc axes (Fig. 5), and tc is the component body thickness. The EC moments of inertia are calculated considering an internal void of tc/2, where nx and ny are the number of leads on each side of length d and c, respectively. Thus, the total stretch δ can be calculated as the sum of the lead and component body stretches. Assuming a linear relation between the lead stretches δlead,x and δlead,y, with Rx + Ry = 1, the total stretch and consequently the fatc lead stretch reduction factor can be written in terms of these quantities (Eq. 22).
Lead Total Stretch
The total lead stretch δc,long can be calculated for all PCB vibration modes from Eqs. 10, 13, and 22.
Note that the unit term is divided by φ2 so that the α input acceleration is not accounted for φ2 times.
Lead Rotation
The lead rotations are caused by the PCB bending curvature during the vibrations (θPCB or ϕPCB), minus the component body rotation (θc or ϕc), and minus the rotation due to the lead longitudinal stretch (θ2; see Appendix A). The PCB rotations, which induce torsional and bending moments, are given by the derivatives of Eq. 6 with respect to x and y. The rotations necessary to describe the problem are depicted in Fig. 4 and summarized in Appendix A.
Component Body Flexibility to Rotations
The component body flexibility to rotations and moments shall also be included in the model. The flexibility factor due to rotations differs from the flexibility factor due to forces because the beam stiffness differs for these two distinct types of loads. Using the cantilever beam stiffness to moments, the flexibility factors for bending and torsional moments assume a slightly different form, where fatc is the flexibility reduction factor for force (Eq. 22), fatcm1 is the flexibility reduction factor for lead bending rotations (Fig. 2b), and fatcm2 is the flexibility reduction factor for lead torsional rotations (Fig. 2c). In the same way that fatc multiplies the lead total stretch (Eq. 23), fatcm1 multiplies the lead total bending rotation around y and x, and fatcm2 multiplies the lead total torsional rotation around y and x, as presented in Appendix A (Eqs. A.8-11). The inclusion of fatc in Eqs. 25 and 26 is a simplified way to account for the additional component body rotation due to the longitudinal stretches.
Lead Stress
The stretches and rotations found in the previous Sections are now used to evaluate the lead stress.
The von Mises stress is directly proportional to the input acceleration, with GVM being the transfer function from the input acceleration α to the von Mises stress σVM. For random vibrations, the variables are described as Power Spectral Densities (PSDs) (Lalanne 2002).
Fatigue Damage
The fatigue damage model used in this work is the well-known Miner's rule (Lalanne 2002), where the fatigue damage D is calculated as the ratio between the number ni of cycles performed at a prescribed stress and the number Ni of cycles necessary to produce a failure at the same stress level, as described in Eq. 29. The failure occurs when ni = Ni.
The fatigue damage in random vibrations, once the stress response is obtained in PSD form (Eq. 28), is calculated in this work by Dirlik's method (Dirlik 1985), one of the most used in industry and in the literature. The damage, evaluated with a probability distribution function developed statistically through the use of several random signals and with the Rainflow cycle counting method (ASTM 2011), is calculated in terms of T, the time, σVM,rms, the root mean square of the von Mises stress, and Γ, the Gamma function. All stresses are directly proportional to the input accelerations. It is possible to affirm that, for each PCB vibration mode, the relations between shear and normal stresses are constant. When describing the stresses in PSD, these relations are always constant (in a statistical sense) for each frequency band, apart from the PCB vibration modes, because all vibration modes are summed (Eq. 23). Finally, as the root mean square of the power spectral densities represents a statistical average of the stresses, it is possible to assume that the relations between shear and normal stresses are on average constant. Therefore, the von Mises root mean square (rms) stress can be used to approximate the fatigue damage calculations in random vibrations, as presented in Eq. 30.
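As a simplified illustration of the damage bookkeeping, a sketch of Miner's rule combined with a Basquin S-N curve is given below; the Dirlik probability density used in the paper to weight the stress ranges is omitted, and the S-N constants and cycle counts are placeholders.

```python
# Hedged sketch of Miner's rule with a Basquin S-N curve N = Cb * S**(-bb).
# stress_ranges/counts would normally come from a cycle-counting step such as
# Rainflow or, as in the paper, from Dirlik's spectral formulation.
def miner_damage(stress_ranges, counts, Cb, bb):
    damage = 0.0
    for s, n in zip(stress_ranges, counts):
        N_allow = Cb * s ** (-bb)     # cycles to failure at stress range s
        damage += n / N_allow         # fraction of life consumed at this level
    return damage                     # failure predicted when damage >= 1

# Illustrative placeholder values only.
print(miner_damage([80.0, 120.0, 160.0], [1e5, 2e4, 5e3], Cb=1e20, bb=6.4))
```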
RESULTS
The present model is applied to PCB experimental fatigue tests available in the literature. The results are also compared to Steinberg's method predictions. The experimental findings discussed in this Section cover most types of leads, assemblies, and solder joints. All materials, dimensions, and properties used for each case are presented in Appendix C.
In each system, the model is applied with four distinct configurations in order to clarify the fatigue damage contributors: 1) Complete / 9 modes: the PCB is modeled as a plate with 9 natural vibration modes, and the ECs as double beams with 1 natural mode; 2) Acceleration off: the PCB is modeled as a plate with 9 modes, and the ECs as double beams without the contribution of their natural frequency, expressed by δca,mn (Eq. 13); 3) Moment off: the PCB is modeled as a plate with 9 natural modes, and the ECs as double beams with 1 natural mode, not accounting for the influence of the PCB curvature; 4) Complete / 1 mode: the PCB is modeled as a plate with only 1 natural mode, and the ECs as double beams with 1 natural mode.
Plastic Leaded Chip Carrier (PLCC) and Leadless Ceramic Chip Carrier (LCCC)
Liguore and Followell (1995) performed random vibration tests on 8 PCBs with 9 electronic components, 3 of the PLCC type with 68 J-leads and the others of the LCCC type with 32, 68, and 84 pins, as described in Fig. 6. The epoxy fiberglass PCBs (E-Glass), of 152.4 × 152.4 × 1.45 mm3, were mounted in a shaker with the side close to components U1, U2, and U3 free. As the analytical model uses a 4-side simply supported plate, the PCB was mirrored about the free edge. The PCBs were vibrated in a frequency band of 50 Hz around the first natural frequency of the board with a constant PSD. The applied levels and PCB amplifications were varied in order to produce results between 30 and 80 Grms.
In the present model, the copper PLCC leads are modeled with the following geometric parameters: a = 0.08 mm, b1 = 1.02 mm, b2 = 1.02 mm, t = 0.25 mm, l1 = 0.735 mm, and l2 = 0.43 mm. The leads made of Pb solder (LCCC) have a roughly conical shape, and the rupture often appears in the middle of the cone. Thus, the LCCC lead is modeled with the parts b1 = b2 = 0.85/2 mm and with a constant diameter given by the average (l1 = l2 = 1.27 mm). The results of the present model are shown in Tables 1 to 5 and compared with the experimental results from Liguore and Followell (1995) and the semi-analytical beam model from Steinberg (2000). The influence of the acceleration stretch on the fatigue life is lower than 11%, explained by the ECs' calculated natural frequencies being 200 times higher than the maximum vibration frequency of 175 Hz. The component with the lowest calculated natural frequency is the 68 J-lead (U1, U2, and U3), with 31123 Hz. Also, as the geometrical stretch (Eq. 10) is the main damage contributor in this case, the regions on the diagonal of the board have lower fatigue lives.
The PLCCs have longer fatigue lives than the LCCCs due to the difference in stiffness. Both components have similar fatc factors (Eq. 22). However, the LCCC components have ceramic bodies and short leads made of solder, which are stiffer and produce higher stresses.
Steinberg's results for all ECs but U2 are of the same order of magnitude as the experimental results, since the geometrical stretch is the most damaging contributor in these components. However, it is conservative for the EC in the PCB center region (U2).
Table 5: Cycles to failure for LCCC with 30.6 Grms, 84 pins.
Plastic Ball Grid Array (PBGA)
The Plastic Ball Grid Array (PBGA) components consist of a plastic body and spherical leads of solder. Yu et al. (2011) presented random vibration results for PBGAs with SAC 305 and SAC 405 Pb-free solder joints (leads). The tests were performed on FR4 PCBs of dimensions 100 × 50 × 0.6 mm3, attached to the shaker through 6 screws, as described in Fig. 7. Two tests were performed for each solder configuration, one with 0.15 g2/Hz and another with 0.25 g2/Hz (PSD), both in the 40 to 1000 Hz frequency range. The solder joint/lead is modeled with b1 = 0.2 mm; the second part has a negligible length, b2 = 0, and a diameter of l2 = 0.22 mm (the smallest diameter of the solder ball), used for stress calculations.
The results are shown in Tables 6 to 9. As expected for solder leads, the high stiffness of the connection makes the geometric stretch (Eq. 10) the main contributor to the fatigue damage. The components' calculated natural frequency is 606320 Hz, which is far beyond the tested frequency band. Therefore, the acceleration stretch contribution is minimal. It is also possible to observe the good agreement of the present predictions.
In this case, Steinberg's method presents results in agreement with the experiments due to the high contribution of the geometrical stretch, the only structural feature modeled by Steinberg.
Plastic Dual in-line Package (PDIP) and Tantalum Capacitor
The Plastic Dual in-line Package (PDIP) components are composed of a plastic body with 14 leads mounted on 2 parallel sides. The Tantalum capacitors are cylindrical components with 2 leads. Genç (2006) tested 6 PCBs, 3 fully populated with 24 PDIPs (Fig. 8a) and the other 3 fully populated with 30 capacitors (Fig. 8a). Both components were mounted on the boards by through-hole solder joints. All the PCBs have dimensions of 233 × 15 × 1 mm3 and are attached to the shaker as shown in Fig. 8. The test is applied in steps of increasing PSD level (in g2/Hz), and the PSD ratio between steps is close to 1.56. Genç (2006) did not achieve failures for the PDIPs. For these components, the test started at step 3 and lasted until 2.5 minutes of step 14, when the shaker aborted. Therefore, any result higher than 662.5 minutes for the PDIP lifetime is in agreement with the experimental tests. The capacitors' test started at step 1, and they failed mainly in steps 3 and 4.
The comparisons between experimental and analytical times to failure are described in Tables 10 and 11. The results presented by Genç (2006) were not discriminated by EC positions on the board. Therefore, it was only possible to compare the ranges of times to failure.
The present analyses performed step 14 without a time constraint, so it could surpass 60 minutes. The PDIPs' calculated natural frequency of 55730 Hz is much higher than the vibrated band of 20 to 2000 Hz. Thus, the acceleration stretch contributes almost nothing to the fatigue. The capacitors' calculated natural frequency of 990 Hz is inside the vibrated band of 20 to 2000 Hz. Therefore, the acceleration stretch (Eq. 13) is the most damaging contributor, and the middle region of the PCB is the most critical one for the capacitors. The present results are of the same order of magnitude as the experimental ones.
CONCLUSIONS
A novel analytical model is presented to predict the useful life of electronic components under random vibration. The Complete / 9 modes model presented the closest approximation to the experimental results. As this is the most complete model, and it seemed to be always more conservative than the others, this is the recommended configuration. The Acceleration off model excludes the acceleration stretch term (δca,mn). Its fatigue life estimates varied from 0% higher than the Complete / 9 modes model, for stiff lead attachments, to 316% higher for the most flexible one (Tantalum). The Moment off configuration proved the importance of the PCB bending effects, since it presented life estimates on average 35% higher than the recommended model, with the increase in fatigue life estimates varying between 0% and 522%. The Complete / 1 mode model results depend on the test frequency range: for a large range, this model gives life-to-failure estimates 4% to 14% higher than the recommended model, with a maximum of 155% for the Tantalum capacitor.
The lifetime predictions of the full present model are of the same order of magnitude as the experimental findings. The obtained results are more accurate than Steinberg's method and give reasonable predictions for all the electronic components studied. Steinberg's method failed mainly for components whose natural frequency is relatively low (Tantalum capacitors). The proposed method calculates the distributed stress along the leads, which allows a workaround in case of fatigue problems.
The objective of creating an accurate model with low computational cost has been achieved. The practical applications show that the model is suitable for the design of aerospace vehicle electronic embedded systems, or of any other vehicle exposed to dynamic loads.
In Eq. 30, bb and Cb are the S-N variables given by the Basquin relation, and G1, G2, G3, R, and Q are variables defined in Appendix B.
Liguore and Followell (1995) presented their results only by component type; no information about component position was given. So, the results of components U1, U2, and U3 are analyzed together. All the present model results are of the same order of magnitude as the experimental ones.
Table 10: Failure step (minutes to failure) for PDIP.
Table 11: Failure step (minutes to failure) for tantalum capacitor. | 5,692.4 | 2017-09-21T00:00:00.000 | [
"Engineering"
] |
Iterative Speedup by Utilizing Symmetric Data in Pricing Options with Two Risky Assets
The Crank–Nicolson method can be used to solve the Black–Scholes partial differential equation in one dimension when both accuracy and stability are of concern. In multiple dimensions, however, discretizing the computational grid with a Crank–Nicolson scheme requires significantly larger storage compared to the widely adopted Operator Splitting Method (OSM). We found that symmetrizing the system of equations resulting from the Crank–Nicolson discretization allows us to use a standard preconditioner for the iterative matrix solver and reduces the number of iterations needed to obtain accurate option values. In addition, the number of iterations required to solve the preconditioned system resulting from the proposed iterative Crank–Nicolson scheme does not grow with the size of the system. Thus, we can effectively reduce the order of complexity in multidimensional option pricing. The numerical results are compared with those of the implicit Operator Splitting Method (OSM) to show the effectiveness of the approach.
Introduction
The multidimensional Black-Scholes equation is often used to model options written on multiple assets. One of the traditional methods, both in practice and in research, for discretizing multidimensional Black-Scholes equations is the Operator Splitting Method (OSM) [1-3]. To solve multidimensional Black-Scholes equations, the OSM solves one-dimensional Black-Scholes equations in turn. Thus, it is possible to use a highly efficient tridiagonal matrix solver as in one dimension [2]. The OSM converges at first order in time and second order in space if we discretize multidimensional Black-Scholes equations with an implicit central difference method. In one-dimensional cases, in which the option is written on a single asset, the order of convergence in time can be improved to second order without much difficulty by replacing the time integration scheme with Crank-Nicolson.
In multiple dimensions, however, replacing the time integration scheme is not as straightforward as in one dimension. In general, multiple assets are correlated, and thus the multidimensional Black-Scholes equation has corresponding cross partial derivative terms, which do not appear in one-dimensional Black-Scholes equations. Sometimes, these partial derivatives are not computed and are assumed to be known, which is easy to implement but leads to inaccuracy under high correlation and large volatilities. If an implicit scheme along with OSM is applied to discretize the mixed partial derivatives appearing in the equation, the resulting system is no longer tridiagonal and the Thomas algorithm is not applicable. Therefore, another matrix solver, or more advanced multidimensional modeling with radial basis functions [4,5], has to be used at the cost of computation time. In practice, however, the Thomas algorithm is indispensable because of its highly efficient nature. Therefore, one avoids fully implicit discretization of multidimensional Black-Scholes equations by replacing the mixed partial derivative terms with known values, so that the other partial derivative terms can be discretized implicitly. In this way, practitioners can use the OSM with the most efficient matrix solver, the Thomas algorithm, but end up with only first order convergence in time.
Having second order convergence both in space and time is important if highly accurate option values are of concern. Second order convergence in time is helpful for practitioners if an option has complex payoff structures or if parameter adjustment is needed before maturity. In addition, Greeks (∆, Γ, etc.) are usually calculated in the post-processing stage using the computed option values, and more significant digits in the option price will improve the accuracy of the Greeks. However, in practice, the time to obtain one more significant digit in the option price grows exponentially with a first order convergence rate in time. Therefore, it is natural to try second order schemes such as Crank-Nicolson, BDF-2 [6,7], etc. Each of the second order schemes has its own advantages and disadvantages. We use the Crank-Nicolson scheme, which has second-order convergence in space and time.
A straightforward Crank-Nicolson discretization of the multidimensional Black-Scholes equation produces a system that makes a direct solver unattractive in terms of computational effort. However, a simple modification to symmetrize the system helps to solve it more efficiently with an iterative solver. We found that, after symmetrization, a standard preconditioner for the iterative solver significantly reduces the number of iterations for the problems that we tested. Finally, we compare the computational complexity of the iterative Crank-Nicolson method and OSM.
Nomenclature
We use bold uppercase letters to represent a matrix M, bold lowercase letters to denote a vector v, a superscript prime (') to denote the transpose, and a superscript (n) to indicate the n-th time period. All vectors used in this paper are assumed to be column vectors.
Let V(t, x, y) be the price of the derivative, where x and y are two different asset prices, and let V(t, x, y) be the solution of the two-dimensional Black-Scholes partial differential equation [8,9] given in Equation (1), where the domain is defined by {(t, x, y) | t ∈ (0, T], x ≥ 0, y ≥ 0}. In Equation (1), σi is the volatility of the i-th asset, ρ is the correlation coefficient, and r is the risk-free interest rate. The maturity of the option V is denoted by the subscript T, and the final payoff of the option is expressed as VT. Big O notation is used to measure the growth rate of an algorithm in terms of its input size. For example, f(N) = O(g(N)) means that there exist positive numbers C and N0 such that f(N) ≤ C × g(N) for all N > N0.
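For reference, the standard two-asset Black-Scholes operator that Equation (1) is understood to take, under the backward-in-time convention with a final condition at T, can be sketched as below; this is the usual textbook form and is not claimed to be a verbatim reproduction of the paper's Equation (1).

```latex
\frac{\partial V}{\partial t}
  + \tfrac{1}{2}\sigma_1^2 x^2 \frac{\partial^2 V}{\partial x^2}
  + \rho\,\sigma_1\sigma_2\, x y \,\frac{\partial^2 V}{\partial x\,\partial y}
  + \tfrac{1}{2}\sigma_2^2 y^2 \frac{\partial^2 V}{\partial y^2}
  + r x \frac{\partial V}{\partial x}
  + r y \frac{\partial V}{\partial y}
  - r V = 0 .
```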
Implicit OSM
The Operator Splitting Method (OSM) finds the solution of the multi-dimensional version of Equation (1) by splitting the differential operator so that the multi-dimensional problem becomes several one-dimensional problems [2]. For brevity, let us consider the two-dimensional version of OSM. Equation (1) can be rewritten as ∂tV + LxV + LyV = 0, where Lx and Ly collect the derivative terms associated with the x and y directions (Equation (3)). Given the final condition VT, we find the solution at the previous time, T − ∆t, in two steps. Step (1): find the solution at T − (1/2)∆t by discretizing the equation ∂tV + LxV = 0 with the given VT.
Step (2): find the solution at T − ∆t by discretizing the equation ∂tV + LyV = 0, with the solution found in Step (1) as the given condition. Depending on the size of the time step ∆t, we need to repeat Steps (1) and (2) to find the option price at t = 0. In the current presentation, we have only two steps to obtain the solution at T − ∆t from time T because we have two operators in Equation (3). For a general m-dimensional problem, we need m steps to obtain the solution at T − ∆t. In each step, we have to solve a linear system in which M(n) and v(n+1) are known and v(n) is the unknown (Equation (7)). The discretization given in Equations (5) and (6) is not a fully implicit discretization because of the mixed partial derivative terms. Note that the mixed partial derivative terms are assumed to be known so that the other partial derivative terms can be discretized implicitly. Thus, we should call it a semi-implicit discretization to be more precise; however, for the rest of this study, we will call the discretization given in Equation (7) an implicit discretization. This implicit discretization is unconditionally stable and has truncation error O(∆x2, ∆t). In addition, the matrix M(n) in Equation (7) is tridiagonal (depending on the boundary condition, the matrix M(n) may not be exactly tridiagonal; however, it can be converted to tridiagonal form [10]). Thus, the system can be solved effectively by the Thomas algorithm [11], which is the main feature of implicit OSM.
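A minimal sketch of the Thomas algorithm used in each OSM sub-step is given below; the array naming and right-hand-side convention are assumptions for illustration.

```python
import numpy as np

# Hedged sketch: solve a tridiagonal system with sub-diagonal a (a[0] unused),
# diagonal b, super-diagonal c (c[-1] unused) and right-hand side d, in O(N).
def thomas_solve(a, b, c, d):
    n = len(d)
    cp = np.zeros(n)            # modified super-diagonal
    dp = np.zeros(n)            # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):       # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):   # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```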
Iterative Crank-Nicolson Method
We propose an iterative Crank-Nicolson finite difference discretization of Equation (1) on a general non-uniform grid. Instead of repeatedly solving smaller one-dimensional problems, as in the Operator Splitting Method (OSM), we propose to fully discretize the two-asset Black-Scholes equation with a Crank-Nicolson scheme. The resulting system can be solved efficiently by a GMRES (Generalized Minimal Residual Method) [12,13] solver with preconditioning after symmetrization.
In the following, we use xi and yj for the prices of the first and second asset on the (i, j)-th finite difference stencil. We define hi = xi+1 − xi and kj = yj+1 − yj. We approximate the differential operators in Equation (1) with central differences on the non-uniform grid (Equations (8)-(14)). Applying Equations (8)-(14) to Equation (1), we obtain the Crank-Nicolson discretization of the two-dimensional Black-Scholes equation (Equation (15)). The symmetrized iterative Crank-Nicolson method for the two-asset Black-Scholes equation is described as follows.
Step (1): Obtain a linear system by discretizing Equation (1) with the Crank-Nicolson scheme. Equation (16) is the matrix-vector form of Equation (15), relating the unknown vector v(n) to the known vector v(n+1) through the matrices L(n) and R(n+1). Note that the data, L(n) and R(n+1), grow quadratically in terms of the number of grid points. In addition, the system in Equation (16) is non-symmetric.
Step (2): Apply appropriate boundary conditions to L(n) and R(n+1). Depending on the option type, the boundary condition is either a linear boundary condition on the truncated interface or an essential boundary condition where the price of the option is zero. We denote the boundary-condition-imposed system by Equation (19). Step (3): Symmetrize the system given in Equation (19) by multiplying both sides by the transpose of Lbc(n), obtaining Equation (20).
Step (4): Create the preconditioner P for Lbc(n)' Lbc(n) using incomplete LU factorization (where LU stands for lower and upper triangular matrices). The choice of the preconditioner is more important than the choice of the Krylov iterative method such as GMRES [14,15]. The effectiveness of the preconditioner P created by incomplete LU factorization is measured by how closely it approximates the system matrix. Step (5): Solve Equation (20) repeatedly using GMRES with the preconditioner P and the previous solution vector v(n+1) as the initial guess, until we find the option price v(0). Use the final condition v(N) = VT = V(T, xi, yj) to start the iteration. We use split preconditioning with the incomplete LU factors, PL−1 and PR−1 (Equations (21) and (22)). The previous solution vector, v(n+1), is used as the initial guess for v(n) when solving Equations (21) and (22).
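A minimal SciPy-based sketch of the symmetrize-precondition-solve loop for one time step is given below; the matrix names, the use of scipy.sparse.linalg, and the tolerances are illustrative assumptions (the paper does not state which software was used), and older SciPy releases expose the GMRES tolerance as `tol` rather than `rtol`.

```python
import scipy.sparse.linalg as spla

# Hedged sketch of Steps (3)-(5) for a single time step. L_bc (sparse), rhs
# (the boundary-condition-imposed right-hand side) and v_next (the previous
# solution v^(n+1)) are assumed to be built elsewhere from the discretization.
def cn_time_step(L_bc, rhs, v_next, drop_tol=1e-7, gmres_tol=1e-8):
    A = (L_bc.T @ L_bc).tocsc()             # symmetrized (normal-equation) matrix
    b = L_bc.T @ rhs                        # symmetrized right-hand side
    ilu = spla.spilu(A, drop_tol=drop_tol)  # incomplete LU preconditioner
    M = spla.LinearOperator(A.shape, ilu.solve)
    v, info = spla.gmres(A, b, x0=v_next, M=M, rtol=gmres_tol)
    if info != 0:
        raise RuntimeError("GMRES did not converge")
    return v
```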
Thus far, we have explained the general procedure of the iterative Crank-Nicolson method for the two-asset Black-Scholes equation. The idea can be extended to the three-asset Black-Scholes equation (Equation (23)), defined on the domain {(t, x, y, z) | t ∈ (0, T], x ≥ 0, y ≥ 0, z ≥ 0}. The difference from the two-asset case is that we have four additional terms to discretize. The discretization is essentially the same with different indices. Thus, we obtain equations that are similar to Equation (15) but have 27 terms on each side instead of nine. In vector notation, the procedure given above in Steps (1) to (5) is the same for the three-asset case.
Oscillations in the solution due to non-smooth initial data are a well-known drawback of the Crank-Nicolson method. Thus, if the final condition of the given option is non-smooth, the computational grid can be prepared so that the option strike price agrees with one of the midpoints in the grid, or v(n+1) in Equation (16) can be replaced with ṽ(n+1) obtained by a simple moving average on a uniform grid (Equation (24)).
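A sketch of the kind of local averaging meant here is shown below; the 5-point equal-weight stencil and the boundary handling are assumptions for illustration, not necessarily the exact averaging of Equation (24).

```python
import numpy as np

# Hedged sketch: replace the terminal payoff on a uniform 2D grid by a local
# average of each interior node and its four neighbours (weights are an
# illustrative choice only).
def smooth_payoff(v):
    v_s = v.copy()
    v_s[1:-1, 1:-1] = (v[1:-1, 1:-1] + v[:-2, 1:-1] + v[2:, 1:-1]
                       + v[1:-1, :-2] + v[1:-1, 2:]) / 5.0
    return v_s
```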
Computational Perspective of the Iterative Crank-Nicolson Method
We demonstrate the iterative Crank-Nicolson method with some European-style options on two assets. In the following, we study the computational cost and order of convergence of the iterative Crank-Nicolson method.
Some Numerical Examples
All numerical computations in this section are performed on the finite domain [0, 300] × [0, 300]. The relative error (%) in the maximum norm is calculated by Equation (25), where u_app. is the computed numerical solution and u_ref. is the reference solution.
We present numerical results with the following three different final payoffs VT and reference solutions Vref [16,17], where M is the bivariate normal distribution and K is the strike price. The payoffs in Equations (26), (28), and (30) are carefully chosen so that: (1) VT in Equation (26) is symmetric and discontinuous; (2) VT in Equation (28) is non-symmetric and discontinuous; and (3) VT in Equation (30) is symmetric and continuous. (1) Cash or Nothing, with parameters: ρ = 0.5, r = 0.02, K = 75, σ1 = 0.15, σ2 = 0.2, T = 1.0. Figure 1a,b shows VT and Vref, respectively.
(3) Basket call: Vref is approximated with the formula given in [18], with parameters ρ = 0.5, r = 0.02, K = 150. We use sparse storage for all matrices throughout the tests. In addition, the drop tolerance is set to 10−7 in the incomplete LU factorization used to create the preconditioner, and 10−8 is used as the GMRES stopping tolerance. We observed that the solution is reached within two iterations for each time step in all examples that we considered. These tolerances show a dependency on the grid size for the problems that we considered and could be optimized, but we did not investigate this effect, as these parameters gave us accurate numerical solutions. Before we proceed to test the iterative Crank-Nicolson method for different payoffs, we show the effect of symmetrization, Equation (20). The comparison between the non-symmetrized system, given in Equation (19), and the symmetrized system is shown in Figure 2. The cash-or-nothing payoff is used with the parameters given in Equation (27). The slope for both the symmetric and non-symmetric cases is approximately constant, which suggests that the number of iterations per time step and the number of iterations per spatial discretization do not grow under grid refinement. The solutions computed with the iterative Crank-Nicolson method and their errors for the three different options are shown in Figure 3. We can observe that there is no oscillation in the computed solutions with non-smooth payoffs. The results of the numerical tests are summarized in Table 1. The ratio column shows the rate at which the relative error drops for both methods. Note that the number of time steps required for OSM is significantly larger in order to maintain the same level of accuracy compared to the iterative Crank-Nicolson method. For a coarse grid, note that the time required to obtain the same level of accuracy with OSM is shorter than with the iterative Crank-Nicolson method. In the beginning, when the grid size is still coarse, the iterative Crank-Nicolson scheme takes more time to precondition the system than to actually solve it. However, the iterative Crank-Nicolson method reaches the target accuracy faster than OSM as the grid becomes finer. Table 1 suggests that the iterative Crank-Nicolson method becomes more favorable in time if accuracy is of concern.
We further compare both methods while keeping the ∆x/∆t ratio constant. The results are summarized in Table 2. We see that the iterative Crank-Nicolson method needs about half of the time required by OSM to reach the same level of relative error. However, the memory consumption of the iterative Crank-Nicolson method is significantly larger than that of OSM.
Computational Cost
We compare the computational complexity of the iterative Crank-Nicolson method and implicit OSM. Throughout this section, we use N and M for the number of discretization steps in space and time, respectively. Both methods have an order of complexity O(MN2) and a storage requirement of O(N2), while the former has a second order convergence rate both in space and time and the latter has a second order convergence rate in space and a first order convergence rate in time. Theorem 1. The implicit OSM for the two-dimensional Black-Scholes equation has a computational complexity of order O(MN2).
Proof. Suppose we partition the space and time domains into (N − 1) and (M − 1) intervals. For each time slice, we have to solve 2N tridiagonal systems (N for each of the x- and y-directions) with the Thomas algorithm, each of which requires O(N) computational cost. In other words, 2N2 operations are needed to solve the system of equations for a fixed time period, and we have a total of M time steps. As a consequence, the total computational cost to solve Equation (3) becomes O(MN2).
With O(MN2) computational complexity, the implicit OSM achieves second order convergence in space but first order convergence in time. The computational demand of implicit OSM increases to O(M2N2) if we increase the number of time steps to M2 in order to obtain an overall second order convergence rate in the error defined in Equation (25). Figure 4a-c shows the growth rate of the computational complexity measured in time for implicit OSM, where M was chosen to be equal to N. The slope in Figure 4 supports the fact that implicit OSM, which has a second order convergence rate in terms of the maximum norm, has computational complexity O(M2N2) = O(N4). Proof (of Theorem 2). Let us assume that we partition the space and time domains into (N − 1) and (M − 1) intervals. Then we have to solve the system given in Equation (16), of size N2, for which the preconditioning and GMRES solver in Equations (21) and (22) require O(N2) computation for each time slice. Therefore, the total computational cost to solve Equation (1) using the iterative Crank-Nicolson method is O(MN2).
The growth rate shown in Figure 4d-f supports the iterative Crank-Nicolson method having computational complexity O(MN2) = O(N3) when M is chosen to be equal to N.
Thus far, we have seen how the computational complexity grows for the iterative Crank-Nicolson method. Memory consumption is another factor that one should consider for the actual computation. Theorem 3. The memory requirement of the fully discretized Crank-Nicolson method is O(N4) for the two-dimensional Black-Scholes equation.
Proof. The fully discretized Crank-Nicolson method, Equation (16), requires holding PL, PR, L(n), and R(n+1) in Equations (21) and (22) for a fixed (n). The exact dimension of these matrices is N2 × N2, requiring approximately N4 storage locations when fully populated. However, P, L(n), and R(n+1) are all sparse matrices, so we can relax the storage requirement to N2 by storing only the non-zeros. The actual number of non-zeros is on the order of N2, not N4. The ratios of non-zeros in the matrices PL, PR, L(n), and R(n+1) are similar. The growth of the number of non-zeros in these matrices in terms of N is shown, for example, in Figure 5: the non-zero terms in the sparse matrices (a) L(n), (b) R(n+1), and (c) P grow quadratically with the number of grid points. The dotted line shown in Figure 5 has slope 2 and shows that the actual storage requirement can be reduced from the order of N4 to the order of N2. Proof (of Theorem 4). To solve the tridiagonal system given in Equation (7), we only need 4N memory locations. However, to do this, we have to hold the data v(n+1). Since the entire computational domain is a grid of size N × N, N2 storage locations are needed in memory to hold v(n+1) throughout the time period ∆t in order to obtain v(n) in Equation (7). Therefore, the amortized cost is on the order of N2.
Before closing this section, we summarize the computational complexity and memory requirements of the implicit OSM and the iterative Crank-Nicolson method. The former is a first order method and the latter a second order method in time. Both have a computational complexity of O(MN2). However, implicit OSM has an O(N2) memory requirement, while the iterative Crank-Nicolson method needs O(N4) memory for the two-dimensional Black-Scholes equation. Therefore, the growth of the data cannot be compared for large N; however, after symmetrizing and using a sparse storage structure, we can reduce the storage requirement to O(N2) for the iterative Crank-Nicolson method, which is worth the effort.
Order of Convergence in Space and Time
The numerical solution obtained by implicit OSM has a second order convergence rate in space and a first order convergence rate in time, because the truncation errors of the finite difference discretization given in Section 3 are second order in space and first order in time (see, for example, Figure 6a-c). The relative error in the maximum norm for implicit OSM drops with slope one, which means it is first order in time. On the other hand, the iterative Crank-Nicolson method has a second order convergence rate both in space and time. The slopes of the relative error curves, shown in Figure 6a-c, support the proposed method having a second order convergence rate in time for all three options that we have considered, while the implicit OSM converges at first order in time. The second order of convergence can be proved by calculating the truncation error. The truncation error of the iterative Crank-Nicolson discretization with non-uniform grid size is indeed second order in space and time (see the Appendix in [19]).
Conclusions
A second order method is essential for pricing options when a highly accurate option price is needed within a limited time. Second order convergence, even in the simplest case of a European-style option, is of great importance in practice when the option has complex payoff structures. For example, a multidimensional Black-Scholes equation has to be solved many times in a row to price and hedge an equity-linked security. A three-dimensional extension of the current study would be an interesting and important future work. A straightforward implementation of the multi-dimensional Crank-Nicolson scheme could be thought to be inefficient. However, with a simple symmetrization and a standard preconditioner, we have found that the order of complexity to solve the system is about the same for the fully discretized iterative Crank-Nicolson scheme and the (semi-)implicit Operator Splitting Method. In other words, the orders of complexity of the first and second order methods turn out to be the same. However, note that the second order method, the iterative Crank-Nicolson method, needs more storage than the first order method, OSM, but reaches the same level of error in significantly less time. It is interesting to observe this trade-off between storage and computational time in the context of option pricing.
Figure 1. The surface of option payoff VT and Vref: (a) cash or nothing VT; (b) cash or nothing Vref; (c) two asset call VT; (d) two asset call Vref; (e) basket call VT; and (f) basket call Vref.
Figure 2. The effect of symmetrization: (a) number of time steps (M) versus total iterations; and (b) number of spatial discretizations versus total iterations.
Theorem 2. The iterative Crank-Nicolson method for the two-dimensional Black-Scholes equation has computational complexity O(MN2).
Figure 4. The order of complexity measured in time for implicit OSM with different payoffs: (a) cash or nothing; (b) two asset call; (c) basket call. The order of complexity measured in time for the iterative Crank-Nicolson method with different payoffs: (d) cash or nothing; (e) two asset call; and (f) basket call. The dotted line shows the growth rate.
Theorem 4. The memory requirement of the implicit OSM is O(N2) for the two-dimensional Black-Scholes equation.
Figure 5. Number of non-zeros in the matrices given in Equation (16) versus N, the number of grid points: (a) L(n); (b) R(n+1); and (c) P. The quadratic growth rate is shown with the dotted line.
Figure 6. The relative error measured in the maximum norm versus the time step size for: (a) the cash or nothing option; (b) the two asset call option; and (c) the basket call option.
Table 1. Computational time comparison between the iterative Crank-Nicolson method and the Operator Splitting Method (OSM) to reach a targeted accuracy while maintaining the convergence rate.
Table 2. CPU time, memory, and relative error comparison using identical ∆x and ∆t for both methods. The memory and CPU time are accumulated over the entire simulation. | 5,587.4 | 2017-01-21T00:00:00.000 | [
"Computer Science"
] |
Ensemble learning-based approach for automatic classification of termite mushrooms
Termite mushrooms are edible fungi that provide significant economic, nutritional, and medicinal value. However, identifying these mushroom species based on morphology and traditional knowledge is ineffective due to their short development time and seasonal nature. This study proposes a novel method for classifying termite mushroom species. The method utilizes gradient boosting machine learning techniques and sequence encoding on an Internal Transcribed Spacer (ITS) gene dataset to construct a machine learning model for identifying termite mushroom species. The model is trained using ITS sequences obtained from the National Center for Biotechnology Information (NCBI) and the Barcode of Life Data Systems (BOLD). Ensemble learning techniques are applied to classify termite mushroom species. The proposed model achieves good results on the test dataset, with an accuracy of 0.91 and an average AUC-ROC value of 0.99. To validate the model, eight ITS sequences collected from termite mushroom samples in An Linh commune, Phu Giao district, Binh Duong province, Vietnam were used as test data. The resulting species identifications were consistent with the predictions of the NCBI BLAST software. This machine-learning model shows promise as an automatic solution for classifying termite mushroom species. It can help researchers better understand the local growth of these termite mushrooms and develop conservation plans for this rare and valuable resource.
Introduction
Termitomyces mushrooms are a type of mushroom gifted to us by nature, known for their high nutritional value and delicious taste (Pegler, 1994). In addition to their high nutritional value, these termite mushrooms are also known for their medicinal properties in many countries around the world. Termitomyces mushrooms have antibacterial properties, for example against Proteus vulgaris (Giri, 2012). Termitomyces clypeatus also supports the treatment of chickenpox (Dutta and Acharya, 2014). The valuable compounds of these rare and valuable mushroom species are obtained through biomass cultivation; Lu et al. (2008) cultivated Termitomyces albuminosus to test its efficacy in pain reduction and anti-inflammation, while Termitomyces striatus was used for other extracted compounds. Termitomyces heimii and Termitomyces microcarpus are used in the treatment of fever, colds, and fungal infections, and in supporting cancer therapy (Venkatachalapathi and Paulsamy, 2016). There are about 30 species of Termitomyces mushrooms worldwide, and 10 species in Vietnam, with Termitomyces clypeatus and Termitomyces microcarpus being common in Binh Duong. Although very valuable economically, the natural yield of these mushrooms is declining significantly, and they have not yet been cultivated sustainably, as they only grow seasonally.
Correctly identifying the name of a termite fungus species is an important task in biological research. Experts use traditional methods to classify and identify termite fungi based on their morphology. The overall structure of a termite fungus includes a cap, flesh, membrane, and stem, which may have rings and boxes (Mossebo et al., 2009). However, fungal structures vary from species to species, especially when mutations occur. Moreover, identifying samples lacking morphological characteristics can be difficult (Roe et al., 2010). A method for identifying new species of organisms that is often used to identify edible and medicinal mushrooms is based on molecular techniques. In this approach, molecular techniques such as DNA barcoding have been successfully used in recent years to identify species (Hebert et al., 2003; Somervuo et al., 2016). These molecular methods are based on analyzing genetic markers and have proven to be highly effective in identifying species, especially when combined with traditional morphological methods. Overall, incorporating molecular techniques into the identification process of termite fungi can provide more accurate and efficient identification, especially in cases where traditional morphological methods fall short.
One gene group commonly utilized in molecular identification is the group that encodes rRNA. This group is highly effective for finding similarities and differences when comparing different organisms, due to the relatively conserved nature of most rRNA molecules (De Peer et al., 1996). For fungi, the rDNA ITS (Internal Transcribed Spacer) region, which includes two sequences, ITS1 and ITS2, flanking the 5.8S sequence, is widely accepted as the molecular region for species identification by most mycologists (Kõljalg et al., 2013), as shown in Figure 1. The ITS region is also used for predicting fungal species using machine learning. This approach involves using ITS sequence data to train a machine-learning model, which can then be used to accurately classify and identify different fungal species automatically. By combining molecular techniques such as machine learning with traditional morphological identification methods, researchers can achieve more accurate and efficient identification of fungal species, aiding both research and conservation efforts.
The ITS sequence data for fungi can be accessed from two major datasets, BOLD (Barcode of Life Data Systems) and the National Center for Biotechnology Information (NCBI). Both contain a vast collection of ITS sequences for all fungal species. Machine learning-based classification of fungal species using ITS sequences has been proposed by several researchers, including (Schloss et al., 2009; Schoch et al., 2012; Delgado-Serrano et al., 2016; Deshpande et al., 2016; Edgar, 2016; Meher et al., 2019; Das et al., 2023). A comprehensive list of the techniques and data used in fungal classification studies is provided in Table 1.
The studies mentioned above have successfully utilized supervised machine-learning techniques such as Naive Bayes classification, kNN, and Bayesian regression models for classifying fungal species. However, only Delgado-Serrano et al. (2016) identified the fungal species at the genus level, while the other studies only determined the species names. These studies used ITS sequence data from the NCBI GenBank, which is not sufficient for identifying the labels of the termite fungi found there. For example, the termite fungus genus Termitomyces euripus has only one ITS sequence in the NCBI GenBank, while there are six labels for this fungal genus in BOLD. Additionally, the lengths of ITS sequences vary widely, ranging from 200 bases to 2000 bases, and the number of sequences per fungal genus varies greatly, from one to 500 sequences. Due to these limitations of the ITS data for termite fungi, classical machine-learning algorithms struggle to accurately classify the labels of termite fungi. Our study focuses on identifying the labels of termite fungal genera using ITS sequence data collected from both the NCBI GenBank and the BOLD GenBank. The K-mer technique and natural language processing (NLP) are combined to extract features, and modern classification methods such as XGBoost (Extreme Gradient Boosting), Random Forest, and CatBoost are tested to build an automatic termite fungal species classifier. The rest of this study is structured as follows: the methods section presents the concepts related to ITS sequence data, the feature extraction techniques, and the overall proposed model, followed by the experimental results and, finally, the study's conclusion.
ITS sequence data
Termite fungi are valuable but endangered, and urgent research and conservation efforts are needed. However, the data on ITS sequences for termite fungi in GenBank are incomplete, making it crucial to synthesize data from different sources. In this article, we compiled ITS sequence data from the two databases, BOLD and NCBI. Specifically, 101 ITS sequences were obtained from BOLD, with the number of sequences per genus ranging from 1 to 12. From NCBI, 1740 ITS sequences were obtained, with the number of sequences per species ranging from 1 to 799. After synthesizing the ITS sequence data from these two databases and removing termite fungal species with fewer than 7 sequences, 1704 sequences belonging to 17 termite fungal species were obtained. The labels of each termite fungal species are presented in detail in Table 2. These data can be used for further research and conservation efforts for these valuable and endangered fungi.
The ITS region of termite mushrooms collected from Binh Duong province, Vietnam, was sequenced (White et al., 1990), and the resulting sequences range in length from 669 to 1050 base pairs. These termite mushroom samples have a morphology similar to that of Termitomyces clypeatus, Termitomyces microcarpus, and Termitomyces striatus. The sequence data for these eight termite mushroom samples have been published and stored in NCBI GenBank. For more detailed information about these samples, please refer to Table 3.
Feature generation
Extracting features from biological sequences is a crucial step in computational biology. Biological sequences are typically strings of letters, which must be converted into numerical vectors before they can be used in machine-learning algorithms (Kamath et al., 2014). The K-mer technique has been used to represent ITS sequences for barcode-based species classification in previous studies (Schloss et al., 2009; Deshpande et al., 2016). In 2016, Delgado-Serrano used K-mer encodings to transform ITS sequences into numerical vectors and showed that the K-mer size affects the accuracy of the prediction model (Delgado-Serrano et al., 2016). In our approach, a combination of the K-mer technique and CountVectorizer was used to encode ITS sequences into numerical vectors. The methodology used to digitize the sequence information is illustrated in Figure 2.
Figure 2 illustrates the process of digitizing ITS sequences. The process uses the Natural Language Processing (NLP) tools from scikit-learn to convert the K-mer words into numerical vectors; each vector counts how often each K-mer in the vocabulary occurs, so its length equals the size of the unigram vocabulary.
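As a concrete illustration of the encoding just described, the following minimal Python sketch turns raw sequences into k-mer "sentences" and count-vectorizes them with scikit-learn. The toy sequences, the k = 7 window, and the default CountVectorizer settings are illustrative assumptions, not the authors' exact code.

```python
from sklearn.feature_extraction.text import CountVectorizer

def to_kmer_sentence(seq: str, k: int = 7) -> str:
    """Slide a window of size k over the sequence and join the k-mers with spaces."""
    return " ".join(seq[i:i + k] for i in range(len(seq) - k + 1))

# Toy sequences standing in for real ITS records pulled from NCBI/BOLD.
sequences = ["ATCGATCGGATCGAT", "ATCGTTCGGATCAAT"]
kmer_sentences = [to_kmer_sentence(s, k=7) for s in sequences]

# CountVectorizer builds the k-mer vocabulary and returns one count vector per sequence.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(kmer_sentences)   # shape: (n_sequences, vocabulary_size)
print(vectorizer.get_feature_names_out())
print(X.toarray())
```

In practice the vocabulary is built over all training sequences, so every sequence is mapped to a vector of the same length.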
Ensemble learning
Supervised machine-learning techniques are widely used in computational biology to solve various problems. Traditional algorithms such as k-nearest neighbors, Naïve Bayes, and decision trees have been applied successfully to identify mushroom species from barcode data (Schloss et al., 2009; Delgado-Serrano et al., 2016; Deshpande et al., 2016), but these models have relatively low accuracy. In our research, two solutions were tested: i) the first used well-known classification methods such as Naïve Bayes and Random Forest to predict the names of termite mushroom species; ii) in the second, we built automated models with higher accuracy using ensemble learning algorithms such as XGBoost and CatBoost.
Gradient-boosted decision trees (GBDTs)
Gradient Boosting Decision Trees (GBDT) (Friedman, 2001) is a method that uses an ensemble of decision trees to predict target values. A GBDT is built by splitting observations on the attribute values of the input data; finding the best split is the most time-consuming part of the partitioning process. For a model with K trees built from a dataset of n samples, the prediction is ŷ_i^(K) = Σ_{k=1}^{K} f_k(x_i), where ŷ_i^(K) is the predicted value of the i-th sample at the K-th iteration and f_k is the k-th tree. The cost function of GBDT has two parts, a training error and a regularization term: Cost = Σ_{i=1}^{n} l(y_i, ŷ_i) + Σ_{k=1}^{K} Ω(f_k), with Ω(f_k) = γT + (1/2)λ‖w‖², ∀k = 1, ..., K, where T is the number of leaf nodes, w is the vector of leaf-node scores, γ is the leaf penalty coefficient, and λ ensures that the leaf nodes' scores are not too large.
FIGURE 2
Illustration of the use of the K-mer method to encode ITS sequences into numeric vectors; in the example, the K-mer size was 7.
CatBoost
CatBoost is a gradient-boosting algorithm on decision trees designed to handle datasets with a large number of categorical input features (Prokhorenkova et al., 2018). In computational biology, CatBoost has been applied, for example, to identify bacterial genes at the 16S rRNA level (Meharunnisa and Sornam, 2022) and to build a feature extraction package for DNA, RNA, and protein sequences (Robson, 2022). We used the CatBoost algorithm to build a model for termite mushroom species classification.
XGBoost
XGBoost is a powerful machine-learning algorithm that builds upon the original gradient-boosting machine (Friedman, 2001; Chen et al., 2015) and adds many improvements (Ren et al., 2017; Zhong et al., 2018; Jiang et al., 2019). These improvements, achieved through parallel computation, have significantly increased processing speed, making XGBoost up to 10 times faster than GBM. XGBoost has been successfully applied in many fields, including computational biology.
Building the best classifier based on ensemble learning
CatBoost is a viable option for gene sequence data analysis, as indicated by recent research (Robson, 2022). In our experiments with termite mushroom data, CatBoost performed comparably to XGBoost in prediction accuracy but required considerably longer training time to reach a similar level of performance. Therefore, XGBoost was chosen as the primary algorithm for our prediction model.
The XGBoost model's performance depends on several key parameters, such as 'max_depth', 'gamma', 'n_estimators', and 'learning_rate'. These hyperparameters can be tuned manually during training or automatically. The proposed enhanced model uses the Bayesian Optimization technique (Klein et al., 2017), combined with random search, to tune the four main hyperparameters of the XGBoost classifier: 'max_depth', 'gamma', 'n_estimators', and 'learning_rate'.
To further improve predictive performance, 5-fold cross-validation was performed to select the best classification model, in addition to Bayesian Optimization of the hyperparameters. A new dataset consisting of n data samples and m features was obtained from the results of phase 1, and the optimized parameters were then used as input for Algorithm 1 to build the final classification model.
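The following sketch shows how such a tuning loop could look in Python with the bayes_opt package, scoring each candidate configuration by 5-fold cross-validated accuracy. The synthetic data, search bounds, and iteration counts are stand-ins for the real k-mer feature matrix and the authors' settings, not their actual pipeline.

```python
from bayes_opt import BayesianOptimization
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

# Stand-in for the count-vectorized k-mer features and genus labels from phase 1.
X, y = make_classification(n_samples=300, n_features=60, n_classes=3,
                           n_informative=15, random_state=0)

def cv_accuracy(max_depth, gamma, learning_rate, n_estimators):
    """Mean 5-fold cross-validated accuracy for one hyperparameter configuration."""
    model = XGBClassifier(max_depth=int(max_depth), gamma=gamma,
                          learning_rate=learning_rate, n_estimators=int(n_estimators),
                          eval_metric="mlogloss")
    return cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()

optimizer = BayesianOptimization(
    f=cv_accuracy,
    pbounds={"max_depth": (5, 10), "gamma": (0, 1),
             "learning_rate": (0.01, 1), "n_estimators": (100, 400)},
    random_state=42,
)
optimizer.maximize(init_points=5, n_iter=20)
print(optimizer.max)   # best accuracy and the hyperparameters that achieved it
```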
Building a model for predicting the termite fungus species name
Our study has developed an automated process consisting of four stages to predict the species name of a new termite fungus. The first stage involves collecting termite fungus data from ITS gene sequence repositories. In the second stage, sequence features are extracted and encoded. The third stage involves building a classifier by constructing and tuning parameters to find the optimal classifier. Finally, in the fourth stage, the classifier is used to predict new termite fungus samples. Figure 3 provides a detailed description of this process.
Performance metrics
In our study, "true" and "false" predictions arise from the model's correct classifications, its misclassifications, or its failures to predict. Specifically, correctly predicting a Termitomyces species is a true positive (TP), and correctly excluding a Termitomyces species is a true negative (TN); incorrectly predicting a Termitomyces species is a false positive (FP), and a missed or misclassified prediction is a false negative (FN). To evaluate the performance of the proposed model, several standard methods for assessing machine learning on DNA sequence data were applied (Gupta et al., 2021), as follows. ❖ Accuracy: the proportion of correctly predicted cases, calculated as Accuracy = (TP + TN) / (TP + TN + FP + FN). ❖ Sensitivity: recall, or the hit rate; the true positive rate (TPR) is the ratio of correct positive classifications to the total number of positive cases, calculated as Recall = TP / (TP + FN).
FIGURE 4
Accuracy of the machine-learning algorithms according to K-mer size. Each algorithm produced different accuracies for each K-mer size: CatBoost ranged from 0.87 to 0.88, XGBoost from 0.88 to 0.91, and Random Forest from 0.87 to 0.89, while the Naive Bayes (MultinomialNB) model had the lowest accuracy, ranging from 0.59 to 0.61. Figure 5 presents detailed information on the impact of K-mer size on the AUCROC achieved by each algorithm; notably, the XGBoost algorithm achieved the highest AUCROC when the K-mer size was set to 7, which was therefore used to build the automated model for predicting mushroom species names.
FIGURE 5
Accuracy of machine-learning algorithms according to the K-mer sizes.
FIGURE 6
The ROC curve using the OvR macro-average for each class in the XGBoost method by size K-mer = 7.
❖ Receiver operating characteristic (ROC) curves were used to assess the model's classification performance under imbalanced classes. A ROC curve plots one point (TPR, FPR) for each threshold, showing the relationship between the true positive rate (TPR) and the false positive rate (FPR). The ROC curve and the ROC AUC score are important tools for evaluating binary classification models.
To evaluate the multi-class classifier, the OvR (One-vs-Rest) technique was used, which compares each class against all the others. One class is chosen as the "positive" class, while all the remaining classes are treated as "negative" classes. In the experiment, the last label, class 16, was selected as the positive class and the remaining classes were considered negative. In this way, the multi-class classification output is reduced to a binary classification, allowing all the usual binary classification metrics to be used to evaluate the model.
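A hedged sketch of this evaluation in Python is shown below: a multi-class classifier is trained on synthetic stand-in data and scored with scikit-learn's one-vs-rest, macro-averaged ROC AUC. The data and model settings are illustrative only, not the study's exact configuration.

```python
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Synthetic stand-in for the k-mer feature matrix and genus labels.
X, y = make_classification(n_samples=400, n_features=40, n_classes=4,
                           n_informative=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0,
                                          stratify=y)

clf = XGBClassifier(n_estimators=200, max_depth=6).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)                      # one probability column per class

# One-vs-Rest: each class is treated as "positive" against all the others.
auc_ovr = roc_auc_score(y_te, proba, multi_class="ovr", average="macro")
print(f"macro OvR AUCROC: {auc_ovr:.3f}")
```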
Result of each stage in the proposed process
In the experimental process, Python 3.9 and the libraries scikit-learn, Biopython, XGBoost, CatBoost, and Bayesian Optimization were employed to construct a mushroom classification model following the proposed process depicted in Figure 3.
Select the appropriate K-mer sizes for the classifiers
The accuracy of predictive models based on sequence data is significantly affected by the K-mer size (Delgado-Serrano et al., 2016). To explore this effect, classifiers were built following the pipeline in Figure 3 with different K-mer lengths, using the machine-learning algorithms Naive Bayes (MultinomialNB), Random Forest, XGBoost, and CatBoost, which resulted in varying classification accuracies. The classifiers' results for each K-mer size are presented in Figure 4: each algorithm produced different accuracies for each K-mer size. Specifically, CatBoost produced results ranging from 0.87 to 0.88, XGBoost from 0.88 to 0.91, and Random Forest from 0.87 to 0.89, whereas the Naive Bayes (MultinomialNB) model had the lowest accuracy, ranging from 0.59 to 0.61. The classifiers' results for each K-mer size are also presented in Figure 5.
Figure 6 presents detailed information on the impact of K-mer size on the highest accuracy achieved by each algorithm.Notably, the XGBoost algorithm achieved the highest classification accuracy when the K-mer size was set to 7, which was also used to build the automated model for predicting mushroom species names.
Performance analysis of other machine-learning algorithms
Apart from using accuracy as a measure of the classification model's performance, other metrics such as precision, recall, F1 score, or AUCROC are also utilized to evaluate the classifiers' performance.A summary of the performance of the surveyed machine-learning algorithms is presented in Table 4.
Furthermore, the AUCROC for each class was calculated using the ROC curve method with the OvR macro-average for the multiclass model utilized (Pedregosa et al., 2011).In this study, the last class (class 16) was designated as the positive class, while all other classes were considered negative classes.The visual representations of each class's results are presented in Figure 6.
Comparative analysis for prediction of fungal species
Previous models for predicting fungal species have been evaluated using the K-mer method with machine-learning techniques such as k-Nearest Neighbor, Naïve Bayes, and Random Forest; their results are presented in Table 5. Our proposed approach demonstrates superior performance when using a K-mer size of 7 with the XGBoost classification algorithm. Table 5 compares the performance of the various classifiers for predicting fungal species from ITS sequences.
Compare the prediction results of the proposed model with the results of BLAST
The ITS sequences of termite fungi collected from Binh Duong province, Vietnam, were published on NCBI and are detailed in Table 3. Our proposed classification model produced species identifications comparable to those obtained from NCBI. For instance, sequences MF163150-BD1, MF163151-BD2, and MF163147-BD7 were identified as the same species as on NCBI, and the identification of MF163149-BD4 was also consistent with NCBI. For MF163445-BD3, MF163446-BD6, and MF163149-BD4, the identification was previously unknown or unclear; our model identified MF163445-BD3 and MF163446-BD6 as Termitomyces striatus, consistent with the type strain of the collected fungi, and the result for MF163149-BD4 was likewise consistent with the identification on NCBI. Table 6 presents the detailed species identification results.
Accurately identifying new species is crucial for studying biodiversity and formulating conservation policies for endangered species (Van Velzen et al., 2012). Traditional methods of species identification based on physical characteristics can be difficult, prompting the use of DNA barcoding as an alternative approach (Hibbett et al., 2011). In this study, a novel computational method is proposed that utilizes K-mer techniques and NLP vectorization to convert DNA barcode sequence data into digital features. The XGBoost algorithm is then employed to build a model capable of predicting termite mushroom species using the ITS sequence as a DNA barcode.
The performance of the developed model was evaluated on 1704 sequences of 17 mushroom species obtained from two ITS GenBanks.The evaluation was conducted using standard classification metrics such as accuracy, precision, recall, F1-score, and AUCROC.
Our proposed model was assessed by comparing its predictions with the species identifications on NCBI; it was fully consistent with the identified species of the mushroom ITS sequences and also predicted the species names of two sequences that had not previously been identified. One example is the Termitomyces striatus mushroom specimen found in Binh Duong province, Vietnam, which was correctly identified by our proposed model. Furthermore, compared with four other research groups' machine-learning models for predicting termite mushroom species names, our model achieved an accuracy of 0.91 and an average AUCROC of 0.99, demonstrating its efficacy in species identification. These results suggest that our proposed model is a valuable tool for identifying termite fungus species in Binh Duong province, Vietnam, and could be applied to other mushroom species as well.
Conclusion
This study presents a computational model to predict termite fungus species from DNA barcodes. The paper also introduces a new method for creating features based on K-mer techniques and NLP vectorization to digitize sequence data, together with an optimized classifier. The model was evaluated on 17 termite mushroom species using standard classification measures, including accuracy, precision, recall, F1-score, and AUCROC, and achieved high accuracy when compared with the species identification results on NCBI. These results suggest that the proposed model can be an effective tool for identifying termite mushroom species based on DNA barcodes, and the proposed method can also be used to predict other species.
Algorithm 1. Building the best XGBoost classifier.
Input: the termite dataset D_Ter and the tuned hyperparameters (e.g., 'max_depth': int(max_depth), 'gamma': gamma, ...)
For each fold i of the k-fold split:
• Divide the D_Ter dataset into D_Train and D_Test
• Train the model with the early-stopping hyperparameters
Calculate the roc_auc_score, accuracy_score, precision_score, recall_score, and f1_score over the iterations
Select the best model based on these scores
Visualize the mean values of these scores
Return the Best_Model
End
❖ Specificity: the true negative rate, i.e., the correct exclusion rate among all negative cases, calculated as Specificity = TN / (TN + FP). ❖ False Positive Rate (FPR, fallout): the rate at which negative samples are mislabelled as positive across all negative samples, calculated as FPR = 1 − Specificity = FP / (FP + TN). ❖ Precision: because the classes in the dataset are imbalanced, precision is used to measure the proportion of truly positive cases among all cases the model labels "positive", i.e., the "deterministic" or accurate positive classification of the model: Precision = TP / (TP + FP). ❖ F1 score: the harmonic mean of precision and recall: F1 = 2 × Precision × Recall / (Precision + Recall).
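A small helper implementing the standard definitions above from raw confusion-matrix counts is sketched below; the example counts are made up for illustration.

```python
def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Compute the metrics defined above from raw confusion-matrix counts."""
    sensitivity = tp / (tp + fn)            # recall / true positive rate
    specificity = tn / (tn + fp)            # true negative rate
    fpr = fp / (fp + tn)                    # 1 - specificity
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "fpr": fpr,
            "precision": precision, "f1": f1}

print(classification_metrics(tp=90, tn=80, fp=10, fn=20))
```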
FIGURE 3
Detailed model of the proposed method. (A) Collected data: a total of 1796 ITS sequences of mushroom fungi were collected from GenBank NCBI and BOLD; after filtering out termite fungus species with fewer than 10 sequences, 1704 ITS sequences remained. (B) Data preprocessing: the ITS sequences were split into smaller sequences using K-mers of size 7, following the rules described in Figure 2; the longest ITS sequence was 2470 bases, corresponding to a vector length of 14425 when encoded. (C) Training: the training process used an 80:20 split ratio and hyperparameter optimization; the model was optimized with 5-fold cross-validation, and Bayesian Optimization was used to fine-tune the parameters 'max_depth': (5, 10), 'gamma': (0, 1), 'learning_rate': (0, 1), and 'n_estimators': (100, 400); the model with the highest accuracy was selected for classification. (D) Prediction: mushroom samples collected in Binh Duong Province, Vietnam, and downloaded from NCBI were used as the test set; these samples were processed with K-mers of size 7 and CountVectorizer, and the best model from stage (C) was applied to predict the species of the new termite fungi.
The results of each stage (A–D) are as follows. ❖ During stage (A), data were collected in two steps: (a.1) gathering data from the NCBI and BOLD GenBanks, which yielded 1740 sequences of 28 mushroom species; (a.2) selecting the 17 species that had at least 10 sequences per species. ❖ During stage (B), data preprocessing was performed in two steps: the ITS sequence strings were separated into K-mers of length k = 7, and the ITS sequences were then converted into numerical data by vectorizing them, with the data labels also converted into numerical values; this section provides details on the number of classes and the corresponding data. ❖ During stage (C), the best prediction model was built, consisting of (c.1) a classification model and (c.2) an optimized set of hyperparameters. ❖ Finally, during stage (D), the performance of the proposed model was reported in step (d.1), and the predictions for the eight ITS sequences collected in Thu Dau Mot, Binh Duong province, were shown in step (d.2).
TABLE 1
Relevant works that used machine-learning based on ITS dataset.
TABLE 2
Termitomyces species used for the training dataset.
TABLE 3
Information of Termitomyces species in Binh Duong Province, Viet Nam.
TABLE 4
Summary of the performance of the machine-learning algorithms.
TABLE 6
Result in comparison of the species identification of ITS sequences of termite fungi collected in Binh Duong province, Vietnam, with the identification on NCBI. | 6,056.4 | 2023-10-11T00:00:00.000 | [
"Biology",
"Computer Science",
"Environmental Science"
] |
Hepatoid adenocarcinoma of the stomach: a unique subgroup with distinct clinicopathological and molecular features
Objectives Hepatoid adenocarcinoma of the stomach (HAS) is characterized by histological resemblance to hepatocellular carcinoma and a poor prognosis. The aim of this study is to elucidate the clinicopathological and molecular characteristics of HAS. Methods Forty-two patients with HAS who received gastrectomy were enrolled in this study. Based on a panel of 483 cancer-related genes, targeted sequencing of 24 HAS and 22 clinical parameter-matched common gastric cancer (CGC) samples was performed. Prognostic factors for overall survival (OS) and disease-free survival (DFS) were analysed with the Kaplan–Meier method. Results The most frequently mutated gene in both HAS and CGC was TP53, with a mutation rate of 30%. Additionally, CEBPA, RPTOR, WISP3, MARK1, and CD3EAP were identified as genes with high-frequency mutations in HAS (10–20%). Copy number gains (CNGs) at 20q11.21-13.12 occurred frequently in HAS; nearly 50% of HAS tumours harboured at least one gene with a CNG at 20q11.21-13.12. This CNG tended to be related to more adverse biobehaviour, including poorer differentiation, greater vascular and nerve invasion, and greater liver metastasis. Pathway enrichment analysis revealed that the HIF-1 signalling pathway and signalling pathways regulating stem cell pluripotency were specifically enriched in HAS. The survival analysis showed that a preoperative serum AFP level ≥ 500 ng/ml was significantly associated with poorer OS (p = 0.007) and tended to be associated with poorer DFS (p = 0.05). Conclusion CNGs at 20q11.21-13.12 occurred frequently in HAS and tended to be related to more adverse biobehaviour. The preoperative serum AFP level was a sensitive prognostic biomarker for DFS and OS. Electronic supplementary material The online version of this article (10.1007/s10120-019-00965-5) contains supplementary material, which is available to authorized users.
Recent studies have demonstrated that the potential underlying mechanism of HAS may be the common embryonic origin of the stomach and liver from the foregut and that HAS may evolve through genetic progression and/or genetic divergence [7-9]. However, the exact molecular mechanism of HAS remains unclear. Although the TCGA Research Network has defined four major genomic subtypes of gastric cancer (Epstein-Barr virus (EBV)-infected tumours, microsatellite instability (MSI) tumours, genomically stable tumours, and chromosomally unstable tumours [10]), HAS cannot be classified as any of these. Furthermore, retrospective studies showed that none of the patients with elevated levels of AFP mRNA included in the TCGA dataset can be identified as having hepatoid carcinomas, due to the lack of HCC-like morphology [11-13], indicating that the HAS subtype is genetically distinct. Like any other carcinoma, HAS is a heterogeneous cancer with different clinical outcomes, biological behaviours, and genetic alterations. Moreover, therapeutic targets specific to this unique subgroup have not been identified.
To better characterize this subset, we analysed 42 gastric adenocarcinomas with at least one focal component resembling HCC differentiation. By using next-generation sequencing (NGS), we aimed to establish a molecular/clinicopathological concept of HAS and to identify new therapeutic targets for this unique cancer.
Case selection and clinicopathological factors
Between 2008 and 2017, 61 patients were diagnosed with HAS at Beijing Cancer Hospital, China. The present study enrolled 42 HAS patients who underwent surgical treatment, including 36 patients with radical resection for gastric cancer and 6 with palliative gastrectomy. Clinical parameters, including age, sex, serum AFP level at diagnosis, primary lesion site, and metastasis status including liver and peritoneal metastasis, were obtained by reviewing the medical records. All tumours were staged according to the TNM staging system of the American Joint Committee on Cancer (7th version, 2009). This study was approved by the ethics committee of Beijing Cancer Hospital. All patients gave written informed consent to allow the use of their tissues for medical research.
Evaluation of immunohistochemical staining
Pathological diagnosis of HAS was based on the identification of histological features resembling HCC. There was no quantity requirement for the diagnosis of histological hepatoid differentiation; the diagnosis of HAS could also be made for some patients presenting with focal differentiation. AFP staining was evaluated by two pathologists based on the percentage of stained cells and the staining density. The scores for the percentage of stained cells were classified into three groups: 0 for no stained cells, 1 for 1-50% stained cells, and 2 for 51-100% stained cells. The staining density was scored from 0 to 3: 0 for no staining, 1 for slight staining, 2 for moderate staining, and 3 for intense staining. The two scores were multiplied, resulting in the final stratification of groups by AFP immunohistochemistry (IHC) score: the 1-3 points group and the 4-6 points group.
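As an illustration of this scoring rubric, the following hypothetical helper multiplies the percentage score by the density score and bins the result into the two groups used in the study. The function name, its inputs, and the handling of a zero score are assumptions for illustration only, not code from the study.

```python
def afp_ihc_group(percent_stained: float, density_score: int) -> str:
    """Multiply the percentage score (0-2) by the density score (0-3) and bin the product."""
    if percent_stained == 0:
        pct_score = 0
    elif percent_stained <= 50:
        pct_score = 1
    else:
        pct_score = 2
    total = pct_score * density_score
    if total == 0:
        return "no staining"   # not grouped in the paper; handled here as an assumption
    return "1-3 points" if total <= 3 else "4-6 points"

print(afp_ihc_group(percent_stained=70, density_score=3))   # -> "4-6 points"
```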
Ten haematoxylin and eosin-stained slides for every patient were examined by two pathologists to confirm the HAS cell component percentage. Tumours were classified according to HAS cell component percentage with a cutoff value of 50%. In addition, the following histological features were also recorded: tumour size, tumour invasion depth (T stage), lymphovascular invasion, and nodal metastasis. The histological type was determined according to the Lauren classification.
Sample collection
All resected specimens were subjected to a uniform preparation protocol for formalin-fixed, paraffin-embedded (FFPE) specimens. Tumour and corresponding nontumour samples were collected from each patient.
DNA was extracted using a QIAamp DNA FFPE Kit (Qiagen, Hilden, Germany) according to the manufacturer's instructions. The quantification of genomic DNA samples was assessed with a Nanodrop 2000 spectrophotometer (Thermo Fisher Scientific, Inc., Wilmington, DE, USA).
DNA was stored at − 20 °C for future use. At least 500 ng of DNA was required to perform the NGS library preparation. The library was prepared by using KAPA Biosystems library preparation reagents. The library had an average fragment size of 140-200 bp. Hybrid capture was performed by using NimbleGen Capture reagents.
Sequencing and data processing
Two fastq files were generated per sample, corresponding to full-length forward and reverse reads. The sequenced raw data were subjected to quality control, including assessment of the base sequence quality, sequence content, GC content and sequence length distribution, and relative percentages of unmatched indices.
The reads were aligned to the hg19 b37 version of the human genome. When calling mutations, an average sequence coverage of ≥ 1000 × was required.
Single-nucleotide variant (SNV) and indel calls were subjected to a series of filtering steps to ensure that only highconfidence calls were admitted to the final manual review step. Mutations were annotated by using Annovar software. The somatic MSI status was inferred by interrogating all available genomic microsatellites covered by the 483-gene panel within tumour samples and comparing them against those in the matched normal sample DNA using the MSIsensor program. The specific genes included in the 483-gene panel are listed in Supplementary Table 1.
Survival analysis and statistical analysis
All patients were regularly followed up from the date of first hospitalization at our centre. The final follow-up date was November 1, 2018. Relapse was defined as local recurrence or distant metastasis. The disease-free survival (DFS) time was calculated from the date of radical surgery to the date of relapse. The overall survival (OS) time was calculated from the date of diagnosis to the last day of follow-up or the date of death. SPSS 21.0 software was used for statistical analysis. Pearson's Chi-square test was used to assess the differences between variables. Fisher's exact test was used when the values were less than five. Survival durations were calculated using the Kaplan-Meier method. For all tests, a P value of < 0.05 was considered significant.
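The survival analysis was run in SPSS 21.0; purely as an illustration, an equivalent Kaplan-Meier comparison could be sketched in Python with the lifelines package, as below. The data are made up, and the log-rank test shown is one common way to compare the two curves; the paper does not specify which test produced its p values.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical toy cohort: OS in months, event indicator, and AFP group.
df = pd.DataFrame({
    "os_months": [12, 30, 45, 8, 60, 22, 15, 50],
    "death":     [1, 0, 0, 1, 0, 1, 1, 0],          # 1 = death observed
    "afp_high":  [1, 0, 0, 1, 0, 1, 1, 0],          # 1 = AFP >= 500 ng/ml
})
high, low = df[df.afp_high == 1], df[df.afp_high == 0]

kmf_high = KaplanMeierFitter().fit(high.os_months, high.death, label="AFP >= 500 ng/ml")
kmf_low = KaplanMeierFitter().fit(low.os_months, low.death, label="AFP < 500 ng/ml")

result = logrank_test(high.os_months, low.os_months,
                      event_observed_A=high.death, event_observed_B=low.death)
print(kmf_high.median_survival_time_, kmf_low.median_survival_time_, result.p_value)
```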
Clinicopathological findings
A total of 42 patients with HAS who received gastrectomy were evaluated (age 41-78 years, median age 62 years), 36 of whom underwent radical operation and 6 of whom received palliative resection. Most (90.5%) of the HAS patients were male.
AFP is considered the most representative marker of HAS. In our study, serum AFP levels were elevated (> 7 ng/ml) in 20 of the 24 patients. The median AFP level at the time of diagnosis was 236 ng/ml (range 5.3-7335 ng/ml). Of the 24 patients, 10 (41.7%) had a serum AFP level of ≥ 500 ng/ml.
Regarding the primary lesion site, 12 (28.6%) tumours were located at the gastroesophageal junction (GEJ), 5 (11.9%) were located in the gastric body, and 24 (57.1%) were located in the gastric antrum. The majority of HAS (82.1%) tumours coexisted with poorly differentiated adenocarcinoma, whereas only 17.9% coexisted with well-differentiated or moderately differentiated adenocarcinoma.
Concerning the recurrence patterns, the rate of liver metastasis was as high as 44.4% in the present study population, as expected, whereas only 5.6% of the cases were complicated with peritoneal metastasis. Supplementary Table 2 summarizes the patients' characteristics.
Correlation analysis of HAS cell component percentage and serum AFP level, and AFP immunohistochemical staining
Tumours were classified according to the HAS cell component percentage: the cutoff value was 50%, and the percentage of the HCC-like differentiation component varied from 5 to 100%, with a median value of 35%. The AFP IHC results were also classified into two groups: 1-3 points and 4-6 points ( Fig. 1a-f).
We further analysed the relationship between the HAS cell percentage, serum AFP level and AFP immunohistochemical staining score. Fisher's exact analysis showed that the HAS cell component percentage was associated with the serum AFP level and AFP IHC score, but only the association with the serum AFP level was statistically significant (p = 0.003). Moreover, the serum AFP level was in good accordance with the AFP IHC score (Fig. 2a, b). In addition, we found that SALL4, a novel stem cell biomarker, often showed strong positivity in HAS.
Mutational analysis
From the 42 HAS patients, 24 patients with histologically typical cases were enrolled for NGS. Additionally, we randomly chose 22 clinical parameter-matched patients with common gastric adenocarcinoma (common gastric cancer; CGC) for NGS. The genetic alterations identified are summarized in supplementary Table 3.
We next aimed to investigate the genomic differences between HAS and CGC patients sharing similar clinicopathologic parameters. Among the 24 HAS and 22 CGC sample sequencing results, 1 CGC sample was detected as microsatellite instability high (MSI-H) and 1 HAS and 3 CGC samples had insufficient DNA for analysis, leaving 23 and 18 patients in the HAS and CGC groups, respectively, for NGS analysis. The matched clinicopathological parameters of the two groups are summarized in Supplementary Table 4. Normal tissue adjacent to the tumour (NAT) is commonly used as a control in cancer studies [14,15], so we did so in our study. After filtering shared mutations found in both tumour tissue and NAT, we identified 167 mutations in 94 genes and 168 mutations in 101 genes in HAS and CGC tumour tissues, respectively.
Copy number variation analysis
In addition to gene mutation, copy number variation (CNV) has been shown to be associated with the risk and prognosis of different cancers. Since gene amplification is an important mechanism of carcinogenesis, providing a means for the overexpression of cancer-promoting driver genes, CNV was analysed in 23 HAS and 18 CGC patients in this study. CNV analysis was performed using the event-wise testing algorithm based on the read depth of coverage, according to a previous report [17], and samples with copy numbers higher than 3.2 were considered to exhibit copy number gains (CNGs) and were used for further analysis. Notably, a subset of patients with HAS was found to have increased CNGs at the 20q locus; a total of 11 HAS cases (nearly 50%) were found to harbour at least one CNG in genes located at 20q11.21-13.12, while no such changes were detected in CGC tumour tissues (Table 2).
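For illustration, flagging CNGs with the stated threshold could look like the pandas sketch below; the column names and values are hypothetical and do not come from the study's pipeline.

```python
import pandas as pd

# Hypothetical per-gene copy-number calls for two samples.
cnv = pd.DataFrame({
    "sample":      ["HAS_01", "HAS_01", "HAS_02", "HAS_02"],
    "gene":        ["TOP1", "SALL4", "TOP1", "TP53"],
    "cytoband":    ["20q12", "20q13.2", "20q12", "17p13.1"],
    "copy_number": [4.1, 3.5, 2.1, 1.8],
})

cnv["is_cng"] = cnv["copy_number"] > 3.2            # the paper's CNG threshold
in_20q = cnv["cytoband"].str.startswith("20q")

# Samples with at least one gained gene in the 20q region of interest.
has_20q_cng = cnv[cnv["is_cng"] & in_20q].groupby("sample")["gene"].nunique() >= 1
print(has_20q_cng)
```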
Pathway enrichment analysis
Furthermore, we mapped genetic alterations in HAS and CGC tumour tissues, including mutations and CNGs, to different pathways and found that several pathways were significantly enriched in the altered genes (Fig. 3). The ErbB signalling pathway, PI3K-Akt signalling pathway, and p53 pathway were the shared enriched altered pathways in both HAS and CGC. In addition, we found that the HIF-1 signalling pathway and signalling pathways regulating the pluripotency of stem cells were specifically enriched in HAS.
Fig. 3 Enriched pathways in HAS and CGC tumour tissues
Enriched pathways in HAS and CGC are listed in Supplementary Table 5.
Association between CNGs at 20q11.21-13.12 and clinicopathological parameters
Since CNGs at 20q11.21-13.12 were the most frequent genetic alteration in HAS, to further investigate the clinical relevance of this alteration, we next analysed the association between CNGs at 20q11.21-13.12 and the clinicopathological parameters of patients with HAS. In the HAS cohort, CNGs at 20q11.21-13.12 were observed in half of the patients, and we found that the tumours of patients with CNGs at 20q11.21-13.12 might be more aggressive than nonamplified tumours, including poorer differentiation, greater vascular and nerve invasion, and greater liver metastasis, although these differences were not statistically significant (Table 3).
In addition, patients with CNGs at 20q11.21-13.12 had a trend of higher serum AFP levels and a higher HAS component percentage, although the difference was not statistically significant.
HAS prognosis
The final follow-up date was November 1, 2018. As of the final follow-up, 16 patients had experienced relapse and 20 patients were relapse free. Of the 16 relapsed patients, 13 (81.3%) exhibited recurrence within 1 year after radical gastrectomy; the median time to recurrence was 6.67 months (1.0-30.7 months), and the median follow-up period of the relapse-free patients was 44.6 months (16.8-120.9 months).
Notably, patients with HAS were very prone to develop liver metastasis; as expected, 72.7% of the relapsed patients in this study had liver metastasis.
The survival analysis revealed that a preoperative serum AFP level of ≥ 500 ng/ml was significantly associated with poorer OS (p = 0.007) and tended to be associated with poorer DFS (p = 0.05, Fig. 4a-b).
Discussion
Since Ishikura et al. first proposed the new entity HAS in 1985 [18], sporadic cases of this cancer have been reported [19]. HAS is characterized by HCC-like differentiation and is associated with a high incidence of liver metastasis and a poor prognosis.
Similar to the results of previous research [3], this study showed that HAS was characterized by older patient age, male patient sex, intestinal type, and high liver metastasis frequency.
A few studies have examined the histogenesis or development of this cancer and found that the hepatoid cell component was observed only in invasive lesions, indicating that HAS develops from CGC in the mucosa and differentiates into HAS during tumour invasion and proliferation, acquiring the ability to produce AFP [7]. Our results are consistent with this hypothesis: the higher the HAS cell component percentage in a tumour, the more AFP the tumour secretes.
Recent attempts have been made to apply NGS technologies to clinical practice, developing innovative precision treatment for different subtypes of gastric cancer. Several comprehensive datasets have revealed many genetic characteristics of gastric cancer [10,20,21]. The assemblers of one of these datasets, the TCGA Research Network, defined four major genomic subtypes of gastric cancer: EBV-infected tumours, MSI tumours, genomically stable tumours, and chromosomally unstable tumours. However, retrospective analysis of the TCGA dataset did not provide any information about genetic alterations in HAS, due to its rarity and geographical distribution [11]. The vast majority of HAS case reports are from Asian regions, especially Japan and China. A previous report by Akazawa et al. revealed that HAS may be subcategorized as a solid type of gastric adenocarcinoma with enteroblastic differentiation (GAED) using NGS for 24 patients with GAED, including 3 HAS cases. The most obvious molecular feature of GAED has been reported to be high-frequency TP53 mutations and CNG of ERBB2 [16]. However, our results identified a TP53 mutation rate of 30% in HAS, which is significantly lower than that in GAED (79.2%) and conventional GC (48.0% in the TCGA database) [10,16], but similar to that in another report by Akiyama in which 33% (5/15) of patients with HAS had a TP53 mutation [7]. In addition, CNG of ERBB2 was not frequently detected in our study. All these differences indicate that HAS may be a genetically distinct subgroup compared with GAED, although many overlapping clinicopathological features exist between the two subcategories of gastric cancer, including a solid pattern and AFP expression.
Considering the existing literature contains very limited information on the molecular features of HAS, we performed NGS with a 483-gene panel in 24 HAS cases and found that DNA CNG at 20q11.21-13.12, which was not detected in CGC, was clearly the most frequent genetic alteration in our HAS cohort. Previous research suggested that 20q amplification might serve as a cancer-initiating event in the development of many cancers [22,23]. In GC, CNGs at 20q11-13 were detected in 20-30% of cases [24]; moreover, CNVs tended to be more common in intestinal-type cancers than in diffuse-type cancers [25]. In our HAS cohort, the rate of CNG at 20q12.21-13.12 was as high as 50%. More importantly, with respect to associations between clinicopathological characteristics and genetic alterations, CNGs at 20q 11.21-13.12 were associated with more adverse tumour biobehaviour. Therefore, we hypothesize that potential driver genes located at 20q11.21-13.12 may contribute greatly to the carcinogenesis and development of HAS. Among these amplified genes, TOP1 was found to have the highest amplification frequency. In colorectal cancer, the rate of TOP1 CNG has been reported to range from 53 to 84% [26,27]; moreover, in a metastatic setting, a borderline significant association (p = 0.007) between an increase in the TOP1 CN and an objective response to second-line treatment with irinotecan monotherapy has been reported [28]. Although no relevant reports have been presented in previous GC studies, in our study the high frequency of TOP1 CNG in HAS, but not in CGC, suggested that this alteration may become a useful predictive biomarker for TOP1-targeting therapy.
Another interesting observation to note is that SALL4, a novel stem cell gene, is located at 20q13.2. A member of the spalt-like (SALL) gene family (SALL1 to SALL4), SALL4, is a key factor in the maintenance of embryonic stem cell pluripotency and self-renewal [29,30]. An oncofoetal protein similar to AFP, SALL4 is highly expressed in both the murine and human foetal liver, and its expression declines gradually during development and is silenced in adulthood. SALL4 re-expression is recognized in various cancers and is considered an adverse prognostic factor in HCC, breast cancer, and lung cancer [31,32]. In gastric cancer, SALL4 was reported to play oncogenic roles through the modulation of epithelial-mesenchymal transition (EMT) and cell stemness [33]. In our study, SALL4 expression was detected in 94.7% and 10.5% of HAS and CGC samples, respectively, similar to the 89.0% vs 15.0% noted in a previous report [5]. In addition, pathway enrichment analysis showed that signalling pathways regulating the pluripotency of stem cells were specifically enriched in HAS. Combining all these results, we speculate that SALL4 may play an essential role in HAS carcinogenesis and that it can be considered a novel target for HAS diagnosis and therapy. Given the oncogenic role of SALL4 and its specific expression in a subset of cancers, its usefulness as a therapeutic target has been explored. In HCC cell lines, the SALL4 expression status was associated with histone deacetylase activity, and treatment with a histone deacetylase (HDAC) inhibitor successfully suppressed the proliferation of SALL4-positive HCC cells [34]. In addition, SALL4-expressing lung cancers are sensitive to treatment with the HDAC inhibitor entinostat [35]. Since there is no relevant report on HAS, further investigations on whether SALL4 targeting can treat HAS should be performed.
Overall, our NGS data showed that HAS is highly genetically distinct, as reflected by the frequent CNGs at 20q11.21-13.12. In addition, the mechanism of genomic instability in HAS may be associated with the overexpression of the stem cell marker SALL4. The specific relationship between these characteristics needs further research in the future. To conclude, by comprehensively investigating the molecular features of HAS using NGS, several markers have been identified that may be considered as therapeutic targets in the future.
As a rare, unique type of gastric cancer, HAS has been reported by many authors to exhibit more aggressive biobehaviour and poorer prognosis than CGC. However, the link between AFP and survival has historically been controversial. In H. Katai's study, for example, the 5-year survival rate was 34%, and survival after surgery was found to not be linked to the preoperative serum AFP level [36]. However, other studies indicated that a high level of serum AFP is an independent prognostic factor in gastric cancer [37]. The 5-year survival rates for patients with gastric cancer with AFP ≤ 20 ng/ml, 20 < AFP ≤ 300 ng/ ml, and AFP > 300 ng/ml were 45.8, 17.8, and 0%, respectively [38].
In the present study, the prognosis of HAS patients who received radical surgery was not as poor as previously thought, with a 5-year survival rate of 41.1%. However, the evaluation of more cases and the use of a longer followup period are required to confirm these findings and draw accurate conclusions. However, interestingly, a markedly elevated preoperative serum AFP level (≥ 500 ng/ml), but not a higher HAS cell component percentage was significantly associated with poorer DFS and OS in HAS. Indeed, serum AFP, as the most representative biomarker, has been found to play a critical role in the initiation of HCC progenitor/stem cells [39]. Additionally, among GCs, AFP-producing GCs have higher malignant potential (high proliferative activity, weak apoptosis, and rich neovascularization) than AFP-negative GCs; furthermore, interfering with AFP expression reduced cell invasion and metastasis by enhancing anoikis sensitivity [40]. Therefore, we hypothesize that serum AFP not only is involved in the initiation of HAS, but also plays important roles in tumour progression and invasion.
In summary, this study showed that HAS is genetically characterized by CNGs at 20q11.21-13.12. Investigating and targeting potential driver genes at this locus may provide novel personalized therapies for this rare subtype of GC. The serum AFP level is a prognostic biomarker in HAS, which may also be therapeutically exploitable.
Conclusions
In conclusion, our analysis showed that CNGs at 20q11.21-13.12 happened frequently in HAS and tended to be related to more adverse biobehaviour. In HAS, the HAS cell component percentage is consistent with the serum AFP level. The preoperative serum AFP level was a sensitive prognostic biomarker for DFS and OS. | 5,133.8 | 2019-04-15T00:00:00.000 | [
"Medicine",
"Biology"
] |
Entropy-based dynamic graph embedding for climate change detection
Climate change is a severe problem caused by abnormal climate events. The existing methods for detecting climate changes utilize statistical models to analyze the atmospheric temperature, but a climate event commonly comprises multiple meteorological data. To detect climate changes using meteorological data, we propose a novel dynamic graph embedding model based on graph entropy called EDynGE. A climate event is denoted as a graph, in which the nodes indicate meteorological data and edges indicate the correlation between nodes. Graph entropy measures the information of the climate event, and the EDynGE model clusters graphs based on graph entropy. We conducted experiments on real meteorological data. The results showed that the number of days of abnormal climate events has increased by 304.5 days in the past 30 years.
Introduction
Climate change is a severe problem that leads to the redistribution of global precipitation, melting of glaciers, and rise in sea levels 1,2 . Furthermore, it endangers the balance of the natural ecosystem and threatens the survival of humans. The main reason for climate change is that the terrestrial greenhouse gas emissions cause the atmospheric temperature to rise on the mainland 3 . Existing research shows that in the 20th century, the world's average temperature showed an upward trend 4 . Therefore, climate change detection can help people identify the causes of ecosystem damage and suggest corresponding countermeasures 5 .
Existing data analysis methods for detecting climate changes are based on statistical models that identify temporal and spatial information from meteorological data 6 . As a simple example, most of the statistical analysis usually measures the average temperature and compares it with past climate conditions to detect whether the climate is changing. These methods focus on detecting climate change using single meteorological data. Climate events comprise multiple meteorological data, and the correlation of these data plays an essential role in climate change detection.
To solve this problem, we identified spurious relationships between meteorological data, in which two correlated variables are not causally related 7 . A graph was constructed to model the climate event using these spurious relationships: the vertices of the graph indicate meteorological data, whereas the edges indicate the spurious relationship between two vertices. To obtain information from the climate graph, we calculated the graph entropy using the spurious correlation coefficient. We constructed a dynamic graph embedding model based on graph entropy to cluster the climate graphs. Climate change was detected as an abnormal climate event in a time interval in which the spurious correlation coefficients differ from those of most other time intervals, which is defined as follows.
Definition 1 (Abnormal climate event) An abnormal climate event is defined as a graph G_i in the i-th time interval in which the weights of the edges are significantly different from those in other time intervals. It is formulated in terms of the graph entropy e(G_i) of the climate event G_i and a threshold θ for detecting climate change. Figure 1 shows the climate events in three time intervals, where T indicates temperature, P indicates pressure, S indicates wind speed, G_i indicates the climate event in the i-th time interval, and the weight of an edge indicates the spurious correlation coefficient. The temperature at G_2 increases by 2 °C, which makes the edge weights differ from those of the other two time intervals, indicating that the climate has changed at G_2.
The main contributions of this work are as follows.
• We present the graph entropy to measure the information of climate events, which is calculated using the spurious correlation coefficient.
• We conducted experiments on real meteorological datasets using the EDynGE model. The results showed that the days of the abnormal climate events exhibit an upward trend from 1990-2020.
The remainder of this paper is organized as follows. In the next section, the methodology of entropy-based dynamic graph embedding for detecting climate change is detailed. Then, the experimental results are described. Finally, in the discussion section, the conclusions, limitations, and future works are provided.
Methods
This section describes the dynamic climate graph, graph entropy, and the EDynGE model. A dynamic graph is used to model climate events in each time interval to detect climate change, and the graph entropy measures information about each climate event. The meteorological datasets can be obtained from the China Meteorological Data Service Center (http://data.cma.cn/en) by registering an account.
Graph Construction
The graph is denoted as G(V, E), where V and E denote the vertices and edges, respectively. For a climate event, the vertex indicates the meteorological data, the edge denotes the spurious relationship, and the weight w indicates the spurious correlation coefficient. The coefficient is calculated based on the causality and correlation between two time series, x and y. The time series causality is defined as follows.
Definition 2 (Time series causality) Two series are causally related if one of the series improves the prediction of the other. The causality between two time series x and y is denoted C(x, y), and p is the probability that the two series are not causally related.
The Granger causality test is utilized to calculate the short-run causality between two meteorological time series x and y 8 . The test's null hypothesis is that the two series are not causally related, and it involves two predictions: first, the past values of the series y are used to predict the current y; second, the past values of both x and y are used to predict the current y. If the prediction using the temporal information of both series is better than the prediction using y alone, then x helps predict y. The t-test was utilized to compare the difference between the two prediction results 9 , and the p value denotes the probability of the null hypothesis. If the p value is more than 0.05, the two series x and y are considered not causally related 10 .
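A hedged sketch of this causality check in Python with statsmodels is shown below. The synthetic series, the lag order, and the choice of the ssr F-test p-value are illustrative assumptions; the paper only specifies the 0.05 decision rule.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
x = rng.normal(size=200)                                 # e.g. daily temperature anomalies
y = np.roll(x, 1) + rng.normal(scale=0.5, size=200)      # y partly driven by lagged x

# Column order matters: the test asks whether the 2nd column helps predict the 1st.
data = np.column_stack([y, x])
results = grangercausalitytests(data, maxlag=2)

p_value = results[1][0]["ssr_ftest"][1]                  # p-value at lag 1
causally_related = p_value <= 0.05                       # p > 0.05 -> "not causally related"
print(p_value, causally_related)
```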
The weight of an edge in the graph is the spurious correlation coefficient, which is calculated from the causality and the Pearson correlation coefficient (PCC) 11 . Here, C(x, y) denotes the causality between the two series x and y, and R(x, y) denotes the spurious correlation coefficient between them. The spurious correlation coefficient is inversely proportional to the causality between the two time series: if the causality C(x, y) is 0, there is no causal relationship behind the correlation between the two series, and the corresponding spurious correlation coefficient R(x, y) is 1.
Graph Entropy
The graph entropy is calculated based on information entropy 12 . Assume there are two independent events x and y. Their information should satisfy h(x, y) = h(x) + h(y), where h(x) is the information of the event x and h(x, y) is the information of the two events occurring at the same time, and their probabilities should satisfy p(x, y) = p(x) × p(y), where p is the probability of an event. The information of the event x can therefore be measured as h(x) = −log2 p(x). The information entropy of x is its information weighted by its probability, e(x) = −p(x) log2 p(x). For a set of events X, the information entropy is E(X) = −Σ_{i=1}^{N} p(x_i) log2 p(x_i), where N is the number of events in the set and x_i is the i-th event. To calculate the graph entropy, we calculate the entropy of each vertex in the graph. Vertex entropy is defined as follows.
Definition 3 (Vertex entropy) Given a graph G = (V, E), the entropy of the vertex v_i is defined based on the weights between vertex v_i and the other vertices v_j, formulated as e(v_i) = Σ_{j=0, j≠i}^{N} −w_{i,j} log2 w_{i,j}, where N is the number of vertices. The weight w_{i,j} equals R(v_i, v_j), the spurious correlation coefficient between the vertices v_i and v_j.
The graph entropy is calculated by summing the entropies of all vertices: e(G) = Σ_{i=0}^{N} e(v_i). The dynamic graph entropy is composed of the graph entropies over the time intervals t ∈ [0, T], formulated as E = {e(G_t) | t ∈ [0, T]}. The information of a climate event can be quantified using graph entropy: when one of the meteorological variables changes, the spurious correlation coefficients change, and the graph entropy changes in the corresponding time interval. Abnormal climate events can therefore be detected from the graph entropy.
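A minimal numerical sketch of Definition 3 and the graph entropy is given below; the weight matrix is a toy example, not data from the study.

```python
import numpy as np

def vertex_entropy(W: np.ndarray, i: int) -> float:
    """e(v_i) = sum over j != i of -w_ij * log2(w_ij)."""
    w = np.delete(W[i], i)          # weights from v_i to every other vertex
    w = w[w > 0]                    # skip zero weights, where log2 is undefined
    return float(np.sum(-w * np.log2(w)))

def graph_entropy(W: np.ndarray) -> float:
    """e(G) = sum of the vertex entropies."""
    return sum(vertex_entropy(W, i) for i in range(W.shape[0]))

# Toy spurious-correlation weights between temperature (T), pressure (P), wind speed (S).
W = np.array([[0.0, 0.8, 0.5],
              [0.8, 0.0, 0.6],
              [0.5, 0.6, 0.0]])
print(graph_entropy(W))
```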
Entropy-Based Graph Embedding
The dynamic graph consists of the graphs G_t over time intervals t ∈ [0, T], formulated as G = {G_t | t ∈ [0, T]}. Dynamic graph embedding captures the temporal information of the dynamic graph G by learning a mapping function f: G_t → g_t, where g_t is the embedding vector of the graph G_t. The similarity of the entropies of two graphs is measured as d(e(G_i), e(G_t)) = |e(G_i) − e(G_t)|. The objective of entropy-based graph embedding is to reduce the distance between two graphs with similar entropy. To this end, we construct a dynamic supervised graph, defined as follows.
For the graph G_t, the corresponding supervised graph G_i is the graph in the dynamic graph G whose entropy similarity d(e(G_i), e(G_t)) to G_t is the smallest. The dynamic supervised matrix S is the set composed of these graphs G_i, one for each time interval. As an example, consider the dynamic graph shown in Figure 2, consisting of the three graphs G_0, G_1, and G_2. The entropies of the vertices v_1 and v_2 in G_0 are e(v_1) = 0.241 and e(v_2) = 0.267; together with the remaining vertex, the entropy of graph G_0 is e(G_0) = 0.292 + 0.241 + 0.267 = 0.800. The entropies of graphs G_1 and G_2 are e(G_1) = 0.796 and e(G_2) = 0.848, respectively. The entropy similarity between graphs G_0 and G_1 is d(e(G_0), e(G_1)) = 0.004, and between G_0 and G_2 it is d(e(G_0), e(G_2)) = 0.048. Therefore, the nearest graph to G_0 is G_1, the nearest graph to G_1 is G_0, and the nearest graph to G_2 is G_0, so the dynamic supervised matrix is denoted by S = {G_1, G_0, G_0}.
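The construction of the dynamic supervised matrix can be reproduced directly from the worked example above; the short sketch below picks, for each graph, the other graph with the closest entropy.

```python
entropies = {"G0": 0.800, "G1": 0.796, "G2": 0.848}

def supervised_matrix(entropies: dict) -> dict:
    """For each graph, return the other graph with the smallest entropy difference."""
    S = {}
    for name, e in entropies.items():
        others = {k: abs(v - e) for k, v in entropies.items() if k != name}
        S[name] = min(others, key=others.get)
    return S

print(supervised_matrix(entropies))   # {'G0': 'G1', 'G1': 'G0', 'G2': 'G0'}
```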
We utilize two autoencoders, which share parameters with each other, to reconstruct the dynamic graph and the supervised graph. Figure 3 shows the architecture of the EDynGE model, where G_t and S_t indicate the climate graph and the supervised graph, respectively, and their embedding vectors are denoted g_t and s_t. The autoencoder includes an encoder and a decoder; we use y_i to denote the i-th layer of the encoder and ŷ_i to denote the i-th layer of the decoder. The autoencoder reconstructs the input data through the encoder and decoder in order to calculate the graph's embedding vector. The encoder uses non-linear functions to extract the features that map the graphs into the embedding space, of the standard form y_i = δ(W_i y_{i−1} + b_i), where δ is an activation function and W_i and b_i are the weight and bias of the i-th layer. The ReLU function, f(y_i) = max(0, y_i) 13 , is used as the activation to make the neural network non-linear. The decoder reconstructs the graph from the embedding vector by reversing the encoder's computation. The purpose of the dynamic embedding is to reduce the distance, in the embedding space, between two graphs that have similar entropy. Therefore, we establish a loss function based on the similarity of the graph entropies in the embedding layer, L_s = (1/T) Σ_{t=1}^{T} ||g_t − s_t||²₂, where T is the number of time intervals. Because the graph G_t and its supervised graph S_t have the smallest entropy similarity, L_s reduces the loss between g_t and s_t and thus the distance between the two graphs in the embedding space. Since an autoencoder reconstructs its input, we also establish a reconstruction loss L_1 that penalizes the difference between the input graphs and their reconstructed outputs. To avoid overfitting, we establish a regularization term L_reg = (1/2) Σ_{i=0}^{I} (||W_i||²₂ + ||Ŵ_i||²₂), where W_i and Ŵ_i are the weights of the encoder and decoder in the i-th layer. The joint loss function is established from L_s, L_1, and L_reg. We utilize the gradient descent and backward propagation algorithms 14,15 to train the model. Gradient descent updates the weight and bias of the output layer, W_I = W_I − η ∂L/∂W_I and b_I = b_I − η ∂L/∂b_I, where I indicates the output layer, and each layer's weight and bias are then updated by the backward propagation algorithm, which uses the chain rule to compute the partial derivatives of the loss function with respect to each layer's weight and bias.
Results
In this section, we apply the EDynGE model to real meteorological data and use the local outlier factor (LOF) 16, isolation forest (IF) 17, and box-plot (BP) 18 methods to detect abnormal climate events in the embedding space. IF assumes that outliers are sparsely distributed and lie far from the dense regions of normal observations, so they can be easily separated. LOF detects outliers based on the local density of the data points. The BP method detects outliers using statistical indices and requires the data to be approximately normally distributed.
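As a rough illustration of how these three detectors could be applied to the learned embedding vectors, the sketch below uses scikit-learn's IsolationForest and LocalOutlierFactor together with a simple IQR (box-plot) rule; the hyperparameters and the random stand-in embeddings are placeholders.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor

embeddings = np.random.rand(365, 16)   # stand-in for the embedded daily climate graphs

if_labels = IsolationForest(contamination=0.1, random_state=0).fit_predict(embeddings)
lof_labels = LocalOutlierFactor(n_neighbors=20, contamination=0.1).fit_predict(embeddings)

# A simple box-plot (IQR) rule applied to the distance from the embedding centre
dist = np.linalg.norm(embeddings - embeddings.mean(axis=0), axis=1)
q1, q3 = np.percentile(dist, [25, 75])
bp_labels = np.where(dist > q3 + 1.5 * (q3 - q1), -1, 1)

# -1 marks a detected abnormal climate event in all three labelings
```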
Dataset
Daily climate data from Chinese surface stations in 10 provinces were used to conduct the experiments. These datasets were derived from the statistical compilations of various provincial meteorological bureaus according to the nationwide surface climate statistical method 19. The data were collected from 194 basic and reference surface meteorological observation stations and automatic weather stations in China, with records available since 1951. Each dataset includes 18 elements, including mean pressure, mean temperature, and precipitation. In this study, we used meteorological data from 1990 to 2020 to evaluate the EDynGE model.
Evaluation Metrics
Because the datasets are unlabeled, we propose two different ways to evaluate the EDynGE model. The first way is to label a certain number of data points as outliers; these data points are the embedded vectors of climate events. The selection rule for these outliers is as follows. We assume that 10% of the data points in each dataset are outliers. The embedding vector of the t-th graph is denoted as g_t, and the center of the embedding vectors is c = (1/T) Σ_{t=1}^{T} g_t, where T is the number of time intervals. If the entropies of the graphs are similar, their embedding vectors lie close to each other, while outliers lie far from the normal observations. The 10% of data points farthest from the center are therefore labeled as outliers, and the EDynGE model can then be evaluated using accuracy and F-score.
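A minimal sketch of this pseudo-labelling rule, assuming the embeddings are available as a NumPy array, might look as follows; the helper name is illustrative.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def label_outliers(embeddings, ratio=0.10):
    """Label the `ratio` fraction of points farthest from the embedding centre as outliers."""
    c = embeddings.mean(axis=0)                      # c = (1/T) * sum_t g_t
    dist = np.linalg.norm(embeddings - c, axis=1)
    n_out = int(round(ratio * len(embeddings)))
    labels = np.zeros(len(embeddings), dtype=int)
    labels[np.argsort(dist)[-n_out:]] = 1            # 1 = outlier
    return labels

# y_true = label_outliers(embeddings); y_pred = detector output mapped to {0, 1}
# evaluation: accuracy_score(y_true, y_pred) and f1_score(y_true, y_pred)
```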
In the second way, we propose a hypothesis based on global warming: with increasing temperature, the number of days with abnormal climate events also increases. We counted the number of days with abnormal climate events every year, every five years, and every decade. If the number of such days exhibits an upward trend, this indicates that the climate has changed over the past 30 years, and EDynGE can be used to detect this change. Figure 4 shows the days of abnormal climate events in the four provinces obtained using the IF method. The yearly results show that the frequency of abnormal climate events exhibits an increasing trend. When the days of abnormal climate events were aggregated over five-year periods, all four provinces exhibited a non-linear increasing trend; however, a local minimum was observed in Guangzhou and Shanghai from 2005-2010, and in Beijing from 2000-2005. The results per decade indicated that three provinces (Beijing, Shandong, and Shanghai) showed an upward trend, whereas Guangzhou showed a falling trend followed by a rising trend. According to the experimental results, the detected climate change conforms with the hypothesis in most cases.
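The trend check described above can be illustrated with a short pandas sketch; the detector output is replaced here by a random stand-in, so the numbers carry no meaning.

```python
import numpy as np
import pandas as pd

dates = pd.date_range("1990-01-01", "2020-12-31", freq="D")
is_abnormal = np.random.rand(len(dates)) < 0.05   # stand-in for detector output per day

days = pd.Series(is_abnormal.astype(int), index=dates)
per_year   = days.resample("YS").sum()     # abnormal-event days per year
per_5years = days.resample("5YS").sum()    # per five-year period
per_decade = days.resample("10YS").sum()   # per decade
# an upward trend in these counts is what the hypothesis predicts
```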
Analysis of Results
To conduct a comparison, we utilized a graph convolutional neural network (GCN) and a dynamic-graph-to-vector model (dyngraph2vec) as baselines 20,21. The GCN uses convolutional kernels to capture the spatial information of the vertices in the graph; because GCN is applied to static graphs, it does not consider the temporal information of the dynamic graph. Dyngraph2vec is an unsupervised learning model for embedding dynamic graphs. It provides three variants: an autoencoder (dyngraph2vecAE), a recurrent neural network (dyngraph2vecRNN), and an autoencoder-based recurrent neural network (dyngraph2vecAERNN). Dyngraph2vecAE cannot extract the temporal information from the dynamic graph since it computes the embedding vectors by reconstructing the graphs. Dyngraph2vecRNN utilizes the idea of the skip-gram to consider the temporal information of the graphs 22; it computes the embedding vector of the current graph using the graphs around it. Table 1 shows the accuracy of the models using the IF method under 10% outliers. According to the experimental results, the EDynGE model exhibits the best performance in all provinces. GCN achieves better accuracy than dyngraph2vecAE in 6 provinces, and dyngraph2vecAE achieves better accuracy than the other two dyngraph2vec models. Because GCN, dyngraph2vecAE, and EDynGE do not capture the temporal information of the dynamic graph yet perform better than the two models that do extract temporal features, the effect of temporal information appears negligible for this problem. Table 2 shows the performance of the EDynGE model with 10% outliers. According to the results, the IF method achieved the best accuracy and F1-score for eight provinces but achieved a lower accuracy and F1-score than the LOF method for Beijing and Shanxi. The BP method achieved a better F1-score than LOF for seven provinces, but its scores were worse than those obtained with IF. Overall, the experimental results show that IF exhibited the best performance for most provinces.
To assess performance across different embedding sizes, we evaluated the stability of the EDynGE model using the mean ± std of its scores, where std denotes the standard deviation. Figure 5 shows the stability of the EDynGE model in Beijing. According to the experimental results, the IF method achieved the best average F1-score and had the lowest std among the three outlier detection methods, indicating that IF is the most stable of the three.
We also conducted experiments to validate the EDynGE model with different ratios of outliers. Figure 6 shows the performance of the model on Beijing province for three different outlier ratios. According to the experimental results, the IF method achieved the best accuracy with 10% outliers, while the LOF and BP methods achieved their best performance with 5% outliers. When the ratio of outliers was low, the resulting label imbalance reduced the performance of the EDynGE model; accordingly, all three methods showed their lowest accuracy with 3% outliers.
Discussion
In this study, we proposed the EDynGE model to detect climate change. The model uses the spurious correlation coefficient to calculate the graph entropy and reduces the distance between two climate events with similar graph entropy. We conducted experiments to validate the performance and stability of the EDynGE model for climate change detection. The results showed that the IF method outperformed the LOF and BP methods by 32.5% and 12.7% in terms of F1-score, respectively, and that the EDynGE model outperformed the other dynamic graph embedding models by 37.9% in terms of accuracy. Based on global warming, we hypothesized that, with increasing temperature, the number of days with abnormal climate events exhibits an upward trend. The experimental results showed that the number of days with abnormal climate events increased by 304.5 days from 1990 to 2020, which agreed with the hypothesis and indicates that the EDynGE model can detect climate change.
This study has some limitations. The EDynGE model clusters graphs based on graph entropy; however, graphs with different neighbor structures can have the same graph entropy in some cases, so the model cannot detect outliers with an abnormal neighbor structure. In other words, the EDynGE model ignores the spatial information of the dynamic graph. To overcome this issue, we plan to construct a hybrid model that combines neighbor-structure similarity and graph-entropy similarity for detecting outliers in multiple time series. The second limitation is that the EDynGE model is based on an autoencoder that does not capture the temporal information of the dynamic graph. Although temporal information proved negligible for this research problem, it should still be considered in dynamic graph embedding. To address this, we plan to construct an autoencoder based on the long short-term memory architecture to discover temporal features from the dynamic graph.
We propose a novel idea for detecting outliers in multiple time series. It utilizes the correlation between the time series to construct a dynamic graph and detects the outliers in that dynamic graph, transforming the outlier detection problem from the multiple-time-series domain to the dynamic-graph domain. This can help identify the causes of outliers by tracing the evolution of the graphs. For example, in financial time series, abnormal trends in the stock market can be detected and analyzed using the EDynGE model. Furthermore, digital twin technology is developing rapidly; it uses sensors to record digital information for simulating the condition of an object in physical space. The proposed approach is able to detect anomalies in the recorded digital signals and thus diagnose faults in the physical space. For example, transmission failures in machines and structural damage in buildings can be detected using the proposed idea. | 5,401.2 | 2020-12-29T00:00:00.000 | [
"Environmental Science",
"Computer Science"
] |
Investigation of Hydrophysical Properties and Corrosion Resistance of Modified Self-Compacting Concretes
Improvement of the hydrophysical properties and corrosion resistance of self-compacting concrete against alternate freezing-thawing and the aggressive soils of Southern and Central Kazakhstan is of interest to a wide range of researchers because of the practical application of the results in construction. It is proposed to form a spatially reinforced, fine crystalline cement matrix structure with the densest possible packing by using a complex modifier (hyperplasticizer + polymer + microsilica + fibers) in the composition of self-compacting concretes (SCCs). The introduction of the calculated amount of the above additives increases the operational reliability of the current SCC compositions, increasing the water resistance to W16 and the frost resistance to F = 500, increasing the compressive strength by 20%, and reducing the mass loss of samples during leaching corrosion by up to 50%. It has been experimentally established that the proposed addition of the complex modifier (hyperplasticizer + polymer + microsilica + fibers) to the SCC composition makes it possible to obtain high-quality self-compacting concrete with improved performance characteristics (compressive strength, water resistance, frost resistance, and corrosion resistance). Studies have shown that the SCC compositions modified with the complex modifier have a high degree of resistance in aggressive environments and to leaching corrosion. Based on the results of the conducted tests, the obtained SCC compositions can be recommended for the production of building products working in the zone of alternating freezing-thawing and aggressive soils.
Introduction
A promising direction in construction materials science is the improvement of the technology and the study of the operational properties of self-compacting concrete (SCC), a type of concrete that is increasingly used in the construction industry of the Republic of Kazakhstan [1].
The undoubted advantages of SCC are its high mobility, its ability to compact under the action of gravity and to be placed in densely reinforced structures, and the possibility of eliminating the vibration compaction of concrete, which improves the quality of concrete structures and provides certain advantages to the construction process. Recently, more attention has been paid in the construction industry to the development of SCCs with improved physical-technical, performance, and durability properties [2]. The composition of SCC differs from that of conventional heavy concrete: it has an increased content of sand and binder (cement and fillers), as well as properly dosed superplasticizers based on polycarboxylates [3]. SCC with a properly selected composition is more homogeneous, and its performance is more consistent than that of conventional heavy concrete [4]. At the same time, the requirements for SCCs used in the manufacture of building products and structures differ from those for conventional concrete [5]; such special requirements are primarily due to the specifics of the production, transport, and placement of the SCC. During operation, structures made from SCC in the southern and central regions of Kazakhstan experience the constant impact of a whole complex of aggressive factors: the effect of groundwater, the cavitation effect of water flows (on hydromeliorative structures), and frequent fluctuations of the ambient air temperature passing through the 0 °C zone. These constantly acting aggressive influences reduce the reliability of structures and gradually destroy them [6].
This progressively increases the number of defects, the occurrence of sinks, and the weakening of interstitial partitions and of the cement-filler contact zone, which, in turn, leads to a significant loss of the strength characteristics of the structure [7]. The relevance of the study lies in the fact that very few studies are devoted to the frost resistance and hydrophysical properties of self-compacting concretes; existing works mainly consider the strength and deformative properties of SCC and address only one or two specific problems, for example, the crack resistance and shrinkage of concrete [8].
It is possible to increase the durability and improve the operational reliability of structures made from SCC by applying special protective cladding (monolithic or prefabricated), which reduces water filtration through the working surfaces. The use of protective cladding keeps filtration at 15-20%. However, the integrity of thin protective coatings (0.2 mm thick) is compromised by even minor mechanical impacts, which is a major problem in their use. Scientists are therefore searching for effective materials, for example, polymer compositions capable of resisting both mechanical effects and aggressive environmental influences. Unfortunately, the proposed polymer compositions often involve the use of expensive chemical components, which hinders their widespread use [1]. It should be noted that important measures for the further development and improvement of construction under the influence of an aggressive environment are the improvement of work quality and the optimization of construction time, as well as the rational use of the properties of the applied concretes and the improvement of their quality [9].
Concretes with Portland cement clinker-based binders are still the basis of civil engineering. In modern construction, predominantly modified concretes are used, which make it possible to ensure a given level of quality [10]. Even a small amount of modifying components can dramatically change the process of structure formation and produce durable concrete with improved physical and mechanical characteristics [11]. Multicomponent modifiers on a chemical-mineral basis, whose use makes it possible to obtain a dense and durable structure, are finding practical application in construction [12]. In order to obtain self-compacting concrete mixtures with optimal performance and to improve the operational properties of the hardened concrete, the use of chemical and mineral modifiers is rational and promising [4]. Thus, improving the quality of self-compacting concrete remains a problematic issue and requires development that takes into account the proposed technological method, which provides for the use of cement together with a complex modifier (hyperplasticizer + polymer + microsilica). Each component of this system and its role in creating self-compacting concrete with the specified strength, water resistance, and corrosion resistance is considered separately below [11].
After analyzing the requirements of construction companies, we proposed the idea of combining the useful properties of two types of concrete: SCC and fiber concrete, a composite material consisting of a cement matrix with a uniform distribution of oriented or chaotically arranged discrete fibers of different origin, in order to obtain a matrix that is as dense and resistant to hydrophysical effects as possible. Fiber concretes have enhanced deformation properties, primarily flexural tensile strength, low shrinkage deformations, and high abrasion resistance [8]. Having reviewed the scientific literature on the hydrophysical indicators and corrosion resistance of concrete, having become acquainted with the works of scientists in the field of building materials science on the impact of various types of corrosion on the concrete matrix, and focusing on the peculiarities of the soils of Southern and Central Kazakhstan, we concluded that this work should investigate the corrosion resistance of modified SCC with the use of fiber reinforcement [7,13,14].
The study of the hydrophysical characteristics of concrete products and structures is a multifactorial scientific process consisting of the study of several physical and technical properties, since it is important not only to ensure the specified properties of concrete at the stage of manufacture but also to maintain them during the entire period of operation of the structure. In order to conduct further experiments, the following research objectives were formed: -investigation of the influence of the self-compacting concrete composition parameters and the amount of fiber reinforcement on the frost resistance of SCC and determination of the quantitative dependencies reflecting this performance indicator, which is most important from the point of view of practical application [15]; -investigation of the influence of the self-compacting concrete composition parameters and the amount of fiber reinforcement on the water absorption, water resistance, and porosity of SCC and determination of the corresponding quantitative dependencies [16,17]; -determination of the corrosion resistance of SCC under the aggressive impact of the saline soils of the southern regions of the Republic of Kazakhstan [18].
The purpose of the work described in this article is to study the improvement of the operational properties and reliability of building products from self-compacting concrete, operated under conditions of repeated freezing-thawing and an aggressive environment of soils in Southern and Central Kazakhstan.
Materials and Methods
This work uses theoretical and applied research methods.The theoretical study was based on the fundamental laws of concrete science in order to study in detail and compare the corrosion resistance of self-compacting concretes of different compositions.The applied research was aimed at experimental confirmation of the theoretical hypothesis according to the current standards.
All the components of self-compacting concrete are extracted and produced in the territory of the Republic of Kazakhstan (Almaty region, Almaty city, Kazakhstan).All studies and tests were carried out in accordance with the regulatory documentation in force in the territory of the Republic of Kazakhstan.
Characterization of Concrete Mix Components
Cement CEM I 32.5H produced by "Standart Cement" LLP (Shymkent City, Kazakhstan) was accepted as a binder for the investigated concrete mixtures according to [19].To confirm the compliance of the selected binder with the standards and requirements [20], a number of tests were carried out.The methods specified in these standards allow determining the following parameters of the binder: Fineness of grind: Cement fineness is defined as the residue on the sieve, with mesh No. 008 as a percentage of the original weight of the sieved sample (to the nearest 0.1%).Cement is considered to meet the requirements of the normative documentation if at least 85% of the mass of the test sample has passed through the sieve [21].The tested binder showed a grinding fineness of 92.8%.
Normal density and setting time of the cement paste: The tested binder showed a normal density of 26.7%. The beginning of setting occurred 2 h and 4 min, and the end of setting 4 h and 13 min, after mixing with water. These values are within the normalized range.
Compressive and flexural strength (at the age of 28 days): The strength characteristics of the studied binder at the age of 28 days were 5.3 MPa in bending and 41.1 MPa in compression. These indicators fall within the normative range.
For the tests, we used sand from the manufacturer "Giyada" LLP (Almaty Region, Almaty city, Kazakhstan), which complies with the normative document [22]. According to this standard, sands of the increased-coarseness, coarse, and medium groups with a maximum content of dusty and clayey inclusions of 3% may be used as fine aggregate for heavy concrete, under whose definition SCC falls. However, in order to obtain satisfactory characteristics of the SCC concrete mix, it is necessary to use sand in which the amount of dust-like inclusions does not exceed 1.5%. The test for determining the amount of dusty and clayey inclusions in the considered sand was carried out by the scouring method according to [21]. According to the test results, the content of dusty and clayey inclusions in the tested sand was 1.07%. The coarseness modulus of the tested sand was also determined and was 2.6. These indicators are acceptable for the use of the investigated aggregate in SCC.
Crushed stone of fractions 5-10 mm and 10-20 mm produced by Novtehstroy LLP (Almaty Region, Almaty city, Kazakhstan) with known physical and technical characteristics was used as a coarse aggregate [23].Grain composition of the aggregate meets the requirements of the normative document [24], which defines the basic requirements for crushed stone from dense rocks used as aggregate for heavy concrete, including SCC.
A chemical additive based on second-generation polycarboxylate esters, AR Premium, produced by "Arirang Group" LLP (Astana, Kazakhstan), was used as the hyperplasticizer [25], with the characteristics given in Table 1. As a water-soluble polymer additive, we used "polyvinylpyrrolidone 40.0", a product of "Laborfarma" LLP (Almaty, Kazakhstan), with a molecular weight of 1,200,000-2,500,000 g/mol and a viscosity of 3000 to 6000 MPa·s at 25 °C, meeting the requirements of [26].
By its nature, it is a synthetic polymer, not naturally occurring. It is well miscible with alcohols, water-containing solutions, and chloroform; soluble in water and polar solutions; and almost incompatible with ethers. Molecular formula: C6H9NO; molar mass: 2500-2,500,000 g/mol; density: 1200 kg/m³; melting point: 150-180 °C.
Microsilica MKU-95, produced by Tau-Ken Temir LLP (Karaganda, Kazakhstan) and corresponding to [27,28], was used as a reactive pozzolanic additive. Microsilica is formed by the reduction of high-purity quartz with coal in arc furnaces during the manufacture of silicon and ferrosilicon and consists of very fine spherical particles containing amorphous or glassy silicon dioxide (SiO2) in an amount of at least 85% of the additive weight. Its composition and characteristics are summarized in Table 2.
For volumetric reinforcement, chopped basalt fiber manufactured in accordance with [29] and produced by LLP "Priority", Astana (Kazakhstan) was used. The appearance and the physical and mechanical characteristics of these fibers are given in Table 3. The task of increasing the strength and performance of the studied self-compacting concrete compositions was solved by compacting its structure using the binder together with a complex modifier (hyperplasticizer AR Premium + polyvinylpyrrolidone 40.0 + microsilica MKU-95). To improve the corrosion resistance, an active mineral admixture in the form of reactive microsilica was added to the concrete mix, and basalt fiber was added to mitigate the effect of bending moments.
In this test, the composition of self-compacting concrete of class C30/35 without the use of modifiers and fiber reinforcement [4] was taken as the control.
Tests to determine the required amount of each of the modifiers were carried out stepwise, starting from the control composition. The hyperplasticizer AR Premium was first introduced into the mixture at a rate of 1% of the binder per 1 m³ and was increased by 0.1% in each subsequent test up to a rate of 2% per 1 m³. The tests were then repeated with the introduction of the polymer additive polyvinylpyrrolidone 40.0, starting at 0.1% of the binder per 1 m³ and increasing by 0.1% in each subsequent test up to 1% per 1 m³, and, further, with the introduction of microsilica MKU-95, starting at 5% of the binder per 1 m³ and increasing by 1% in each subsequent test up to 15% per 1 m³. The maximum amount of modifiers in the mixture was fixed based on previously published sources and on economic considerations [30]. The optimum content of the complex admixture (1.5% AR Premium + 0.3% polyvinylpyrrolidone 40.0 + 15% MKU-95) was determined by strength testing of 100 × 100 × 100 mm cube specimens tested at the age of 28 days of normal curing for each concrete composition. The compositions presented in Table 4 were used in the further studies of the physical-mechanical and hydrophysical characteristics and corrosion resistance.
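For illustration only, the stepwise dosing series described above can be written out as follows; the exact ordering of the trial mixes is an assumption, and the values merely restate the percentages given in the text (all as percentages of binder mass per 1 m³).

```python
import numpy as np

ar_premium  = np.round(np.arange(1.0, 2.0 + 1e-9, 0.1), 1)   # 1.0 ... 2.0 %, step 0.1
pvp_40      = np.round(np.arange(0.1, 1.0 + 1e-9, 0.1), 1)   # 0.1 ... 1.0 %, step 0.1
microsilica = np.arange(5, 16, 1)                             # 5 ... 15 %, step 1

# optimum complex-admixture content reported in the text
optimum = {"AR Premium": 1.5, "polyvinylpyrrolidone 40.0": 0.3, "MKU-95": 15}
```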
Concrete Compressive Strength
The compressive strength of concrete determines the resistance of the material to an applied mechanical compressive load.To determine the compressive strength of the investigated concrete compositions, three cube specimens of each composition with the dimensions of the working section 100 × 100 × 100 mm were prepared from the concrete mixture with the same water-cement ratio (W/C) and tested at the age of 28 days of normal curing according to the method by [31].
Determination of the Frost Resistance of Concrete
The method of repeated freezing and thawing in a water-saturated state was used for these tests. The test methodology and the processing of the results followed [15]. Water was used as the saturation medium; air at a freezing temperature of minus 18 ± 2 °C was used for freezing, and water at a temperature of 20 ± 2 °C as the thawing medium. Determination of the concrete frost resistance grade was carried out on specimens with 100 × 100 × 100 mm ribs at the age of 28 days; the number of specimens was set at 6 control specimens and 12 main specimens for each composition. Before freezing (for the main specimens) and before strength testing (for the control specimens), the specimens were saturated with water by immersion to 1/3 of the cube height for 24 h; the water level was then raised to 2/3 of the height for a further 24 h; finally, the water level was raised so that the distance from the top edge of the samples to the liquid level was more than 20 mm, and soaking continued for the next 48 h. The testing regime was strictly observed: the freezing time of the samples was not less than 2.5 h and the thawing time was 2 ± 0.5 h; if chipping, cracking, or flaking of the ribs occurred during testing of a sample, the study was stopped.
Determination of Water Resistance and Water Absorption of Concrete
The wet spot method was used to determine the water resistance. In accordance with the requirements of [16], we prepared cylinder samples with a diameter of 150 mm and a height of 150 mm, because the largest grain size of the aggregate was 20 mm. For each investigated composition, 6 specimens were prepared, stored in a normal-hardening chamber at a temperature of 20 ± 2 °C and a relative humidity of 95 ± 5%, and cured for 1 day in the laboratory before testing. The test consisted of increasing the water pressure in steps of 0.2 MPa over 1-5 min, with the duration of loading at each step equal to 12 h. The water resistance of each cylinder specimen was assessed by recording the maximum water pressure at which no water seepage through the specimen body, in the form of wet spots or signs of water filtration in the form of drops, was observed on the end surface opposite the surface through which the water was pressurized. The water tightness of a series of concrete specimens was evaluated as the maximum water pressure at which no water filtration was observed on at least four specimens out of six.
The water absorption study was carried out according to the requirements of [17]. For this purpose, we used cube samples with dimensions of 100 × 100 × 100 mm (based on the largest aggregate grain size of 20 mm), 3 samples of each composition, which were placed in a container with water at a temperature of 20 ± 2 °C so that the water level was 50 mm above the top edge of the samples. Every 24 h of water absorption, the samples were weighed until two consecutive weighings differed by no more than 0.1%. The results were processed by determining the water absorption of an individual concrete sample by mass (W_m) in percent, with an error of no more than 0.1%, according to Formula (1): W_m = ((m_v − m_c)/m_c) × 100, where W_m is the water absorption of the individual concrete specimen by mass, %; m_c is the mass of the dried sample, g; and m_v is the mass of the water-saturated sample, g.
To determine the water absorption of a series of samples of each concrete composition, the arithmetic mean of the obtained results of an individual sample was calculated.
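A small worked illustration of Formula (1) and the series average is given below; the sample masses are made up for illustration and are not taken from the study.

```python
def water_absorption(m_dry, m_saturated):
    """W_m = (m_v - m_c) / m_c * 100, in percent."""
    return (m_saturated - m_dry) / m_dry * 100.0

samples = [(2350.0, 2402.0), (2361.0, 2410.0), (2348.0, 2399.0)]   # (m_c, m_v) in grams
w = [water_absorption(mc, mv) for mc, mv in samples]
series_value = sum(w) / len(w)   # arithmetic mean over the three cubes of one composition
```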
The porosity of the concrete was determined on cube specimens with a rib size of 70 × 70 × 70 mm (a series of two specimens for each composition) from the kinetics of water absorption, using a discrete weighing method.
Determination of the Corrosion Resistance of the Concrete Specimens
The essence of the test method in accordance with [18] is a comparative analysis of the obtained results of the examination of test samples placed in a non-aggressive environment with the values of the indicators of samples of the same composition placed in an aggressive environment.For each formulation, 3 control samples and 3 basic samples were prepared for each aggressive medium investigated.The dimensions are based on the largest aggregate size and are set as recommended, with rib dimensions of 100 × 100 × 100 mm.Studies were carried out on cube specimens aged 28 days of normal curing in aqueous solutions of 5% sodium sulfate (Na 2 SO 4 ), 3% sodium chloride (NaCl), distilled water, and 0.01 M hydrochloric acid (HCl) solution.Bottled drinking water was used as a non-aggressive medium [32].The samples were dried and weighed before starting the studies.In the next step, three control specimens were tested for flexural tensile strength and compressive strength.The prepared three specimen prisms of each composition were placed in the above aggressive media, ensuring its uniform access to the specimen from all sides, with a study duration of 6 months.
Processing of the test results was carried out by comparing the properties of the specimens kept in the aggressive environments with those of the specimens cured in drinking water. Firstly, the mass loss of the samples (∆m, %) was estimated by Formula (2): ∆m = ((m_1 − m_2)/m_1) × 100, where ∆m is the mass loss of the samples, %; m_1 is the mass of the specimen before the test, g; and m_2 is the mass of the specimen after the test, g. At the next stage, the changes in tensile strength in bending (∆R_tb, %) and in compressive strength (∆R, %) were established by Formulas (3) and (4). The change in compressive strength is ∆R = ((R_1 − R_2)/R_1) × 100, where ∆R is the change in compressive strength, %; R_1 is the compressive strength of the control specimens (before testing), MPa; and R_2 is the compressive strength of the main specimens (at the end of the test), MPa.
The change in tensile strength in bending is ∆R_tb = ((R_tb,1 − R_tb,2)/R_tb,1) × 100, where ∆R_tb is the change in tensile strength in bending, %; R_tb,1 is the tensile strength in bending of the control specimens (before testing), MPa; and R_tb,2 is the tensile strength in bending of the main specimens (at the end of the test), MPa.
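The sketch below illustrates Formulas (2)-(4) with hypothetical masses and strengths; the numbers are not taken from the study.

```python
def pct_change(before, after):
    """Relative loss in percent: (before - after) / before * 100."""
    return (before - after) / before * 100.0

delta_m    = pct_change(2350.0, 2348.5)   # mass loss, %
delta_R    = pct_change(44.0, 42.9)       # change in compressive strength, %
delta_R_tb = pct_change(5.9, 5.6)         # change in flexural tensile strength, %
```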
Results
Practical experience with the operation of products and structures made of self-compacting concrete has shown that these structures are not always sufficiently durable and reliable. There are numerous cases of the premature failure of elements due to alternate freezing and thawing and the corrosion of concrete, resulting in the need for costly repairs. It is therefore necessary to take into account the influence of the aggressive environment and the operating conditions of the structure at once, and to choose the raw materials correctly [33].
Compressive Strength
The compression test results of the modified self-compacting concrete specimens are presented in Table 5 (Figure 1). The obtained data (Table 5) allow us to establish the following scientific observations: -an increase in the compressive strength of composition 5 (1.5% AR Premium + 0.3% polyvinylpyrrolidone 40.0 + 15% MKU-95) by 21% relative to the control composition 1 and by 20% relative to composition 2 (PC + 1.5% AR Premium) was established, which in absolute terms is 7.7 and 7.4 MPa, respectively. When the complex modifier (1.5% AR Premium + 0.3% polyvinylpyrrolidone 40.0 + 15% MKU-95) is introduced into the concrete mixture, the processes of hydrolysis and hydration of the cement particles are more intensive, with the formation of additional crystallization centers, which is confirmed by earlier microstructural analyses of cement stone [34]; -it was found that adding basalt micro-reinforcing fiber at a fiber concentration of 0.7% of the binder weight to the proposed composition 6 gives a slight increase in strength of 7.2% relative to composition 5 (without fiber). It should also be noted that, when the fiber content increases above 1% of the binder, clumping of the fibers is observed, which negatively affects the strength of the concrete. The obtained results are in agreement with the work [30].
Determination of Frost Resistance of Modified Self-Compacting Concrete
In the next stage of research, frost resistance was determined, which depends on the structure and nature of the pores in concrete. The relationship between porosity and frost resistance is a complex and interesting process, as evidenced by the use of an air-entraining admixture in concrete by some researchers to create closed pores to increase its workability and frost resistance; however, the amount of air-entraining admixture must be appropriate to maintain the strength of the concrete. The presence of open pores available for water penetration has a detrimental effect on frost resistance and durability [35].
The frost resistance of concrete is directly dependent on its structure, as it determines the volume and distribution of ice formed in the concrete body at sub-zero temperatures and, consequently, the value of the resulting stresses and the intensity of the increasing weakening process of the structure. Concrete micropores of ≈10^−5 cm usually contain bound water, which does not convert to ice even at very low temperatures (down to −70 °C). Therefore, micropores have no noticeable effect on the frost resistance of concrete, and it depends on the volume of macropores in the concrete [36].
For frost resistance tests, the first basic method according to the requirements of [15] was used for repeated freezing and thawing in a water-saturated state.Determination of the frost resistance grade of concrete was carried out on specimens with 100 × 100 × 100 mm ribs at the age of 28 days.The results of the tests at cyclic alternate freezing and thawing are shown in Table 6.
The results of the tests at cyclic alternate freezing and thawing of the concrete cube samples of different compositions presented in Table 6 showed the following: -a maximum weight loss of up to 4.33% in the control composition 1 after 400 test cycles of alternate freezing and thawing, which exceeds the limit established by the requirement [15] (weight loss of not more than 2%); -the compositions containing the complex modifier (1.5% AR Premium + 0.3% polyvinylpyrrolidone 40.0 + 15% MKU-95) showed high frost resistance characteristics: after 600 test cycles, the mass loss in compositions 5 and 6 was 1.9% and 1.6%, respectively, which confirms the sufficient frost resistance reserve of the proposed compositions of modified self-compacting concrete. In the frost resistance testing, the frost resistance coefficient K_frost = R_frost/R_c is used to determine the actual change in strength after a given number of cycles, i.e., the ratio of the strength of the sample after the frost resistance test (R_frost) to the strength of a control sample in the water-saturated state before the frost resistance determination (R_c). The frost resistance grade of concrete is considered to be secured after the required number of cycles if K_frost > 0.95.
Determination of Water Resistance and Water Absorption of Concrete
When building structures from SCC under conditions of an aggressive environment and contact with water, the concrete should be dense and have good hydrophysical properties: low water absorption and capillary suction, high water resistance, and high frost and corrosion resistance. Works [37,38] note an improvement of 20% or more in the hydrophysical properties of concrete when modifiers are used, especially those that include ingredients with a hydrophobic-plasticizing action. We carried out standard tests of the proposed modified self-compacting concrete compositions according to the methodology described in Section 2.5. The results of the water absorption and water resistance tests are presented in Table 7.
Analysis of the obtained data in Table 7 (Figure 2) shows that the concrete sample with the complex modifier together with basalt fiber (composition 6) reduced the water absorption index of the concrete by 56.5%, composition 2 by 37%, composition 3 by 47.8%, composition 4 by 52.2%, and composition 5 by 54.3% compared to the control (composition 1).
The water resistance of the modified SCC (composition 5) increased by 0.8 MPa, or four grades (marks), in comparison to the control (composition 1). At the same time, the presence of basalt fiber in composition 6 did not affect its water resistance in comparison to composition 5.
Determination of the Corrosion Resistance
Solutions of salts of low concentration are not aggressive to SCC when structures are constantly immersed in them; however, under alternate saturation and drying or under capillary absorption of such a solution, an increase in the concentration of salts in the drying concrete and their crystallization in the concrete pores (the salt form of concrete corrosion) are possible.
The saturation of SCC with salts containing chloride ions (NaCl chlorides) may cause corrosion of the steel reinforcement of the structure, i.e., such salts are aggressive towards reinforced concrete. From the analysis of the causes of corrosion in concrete and the assessment of the degree of aggressiveness of the media in relation to reinforced concrete structures, it follows that corrosion resistance depends both on the conditions of interaction between the concrete and the external environment and on the composition of the acting aggressive solution [39].
In this work, we studied the effect on self-compacting concrete of different liquid aggressive media simulating all three types of corrosion according to [14]. The obtained test results are given in Table 8. The results of the study of the resistance of the SCC samples to aggressive influences, assessed by the changes in average mass, tensile strength in bending, and compressive strength (Table 8), showed that the proposed composition 6 has a high resistance to corrosion. When the specimens were kept in 3% NaCl solution, the mass loss ∆m was 0.091%, the change in compressive strength ∆R_av was 0.144%, and the change in bending strength ∆R_tb was 0.131%.
During corrosion testing in 0.01 mol/L HCl hydrochloric acid solution, composition 6 showed an average mass loss ∆m_av lower by 46.4%, a loss of compressive strength ∆R_av lower by 56%, and a loss of flexural tensile strength ∆R_tb lower by 65.6% compared to the control (composition 1).
When tested for leaching corrosion in distilled water, composition 6 showed an average mass loss ∆m_av lower by 21.5%, a loss of compressive strength ∆R_av lower by 48.8%, and a loss of flexural tensile strength ∆R_tb lower by 49.6% compared to the control (composition 1).
When tested (Figure 3) for the aggressive effect of sulfate in 5% Na2SO4 solution, the specimens of composition 6 likewise showed improved performance: reductions in the mass loss ∆m_av by 40.3%, in the loss of compressive strength ∆R_av by 38.4%, and in the loss of flexural tensile strength ∆R_tb by 46.9% relative to the control (composition 1) were observed. The obtained test results are in agreement with the results of [33,40].
Discussion
The results of compressive strength studies of SCC show a significant improvement in the strength properties in the modified formulations.For example, composition 5 containing 1.5% AR Premium, 0.3% polyvinylpyrrolidone 40.0, and 15% MKU-95 shows an increase in strength of 21% over control composition 1 and 20% over composition 2. This confirms the effectiveness of using these additives to improve the strength properties of concrete.
It is interesting to note that the basalt micro-reinforcing fiber in composition 6 has only a minor effect on the compressive strength of the concrete, increasing it by up to 7% compared to composition 5 without fiber. This indicates that the fiber mainly plays a positive role in improving the flexural strength of the concrete and its deformation behavior under bending loads.
The use of a complex modifier including 1.5% AR Premium, 0.3% polyvinylpyrrolidone 40.0, and 15% microsilica MKU-95 leads to a number of positive effects. These include a reduction in the water-cement ratio, an increase in strength according to the basic law of concrete strength, and an increase in density, which favors the hydrophysical properties and corrosion resistance. The water-soluble polymer additive "polyvinylpyrrolidone 40.0" modifies the pore space of the cement stone, thereby increasing its impermeability and its frost and corrosion resistance [41]. Microsilica increases the density and strength of the cement stone and of the concrete based on it, as well as its resistance to type I corrosion (leaching), due to the binding of portlandite Ca(OH)2 into low-base calcium hydrosilicates of low solubility [42]. Basalt fiber increases the crack resistance through micro-reinforcement, the result of which is the leveling and reduction of stress concentrations in the concrete structure, particularly in the macro-defect zone [43].
The conducted hydrophysical tests show a decrease in water absorption of the complex modifier-modified SCC and an increase in its water resistance.
The positive test results confirm the increased pozzolanic activity toward free Ca(OH)2 of the microsilica MKU-95 contained in composition 6, together with the chemically resistant basalt fiber. This agrees with the theoretical assumptions about the formation of additional crystallization centers and the reduction of the pore space in the concrete body through the reaction of the active pozzolanic additive (reactive microsilica, SiO2): free Ca(OH)2 is bound by the active mineral SiO2 into a poorly soluble compound, calcium hydrosilicate, according to the equation Ca(OH)2 + SiO2 + mH2O = CaO·SiO2·nH2O [44].
When studying the soils encountered in the south of Kazakhstan and their classification, it is necessary to note those that have a high degree of aggressive impact on concrete structures. For example, saline brown soils have a pH of 2.8-3.5; the gray-brown soils of the steppe zone are less aggressive, with a pH in the range of 4.5-5.5. Loamy soils contain sodium bicarbonate (NaHCO3) and sodium carbonate (Na2CO3) and are classified as alkaline soils with an elevated pH greater than 8.5. Corrosion processes associated with the leaching of calcium hydroxide Ca(OH)2, occurring in concrete under the action of soft water, belong to the first group of corrosion. The second group includes corrosion processes associated with the interaction of the cement stone and calcium hydroxide with an aggressive medium, resulting in the formation of easily leachable and easily soluble calcareous compounds or in an increase in volume, accompanied by the reaction Ca(OH)2 + CO2 = CaCO3 + H2O. Calcium carbonate (CaCO3) is insoluble in water; over time, it deposits in the pores of the cement stone, increasing the volume of the concrete and, as a result, increasing cracking and deterioration. However, calcium carbonate is able to interact further with the carbon dioxide in the water to form a soluble acidic salt, causing carbon dioxide corrosion of the concrete: CaCO3 + H2O + CO2 → Ca(HCO3)2.
Sulfate corrosion is one of the most common types of chemical deterioration of cement-based building materials. In contact with concrete, sulfates actively interact with the calcium hydroxide and aluminate constituents of the cement stone. As a result of the reaction of sulfates with calcium hydroxide, gypsum is formed, whose accumulation and growth in the pore space of the concrete leads to its destruction. At a high concentration of sulfates in the liquid phase, an excess concentration of SO4^2− anions appears in the solution, which react with calcium cations: Ca^2+ + SO4^2− → CaSO4·2H2O; the resulting gypsum is saturated with water and increases in volume during crystallization, which leads to the destruction of the cement stone.
In order to obtain complete data on the corrosion resistance of modified self-compacting concrete, we conducted experiments to identify the resistance of concrete to three types of corrosion according to the classification of V.M. Moskvin [45].The corrosion resistance studies showed that the complex modifier-modified SCC formulations have a high degree of resistance in aggressive environments and leaching corrosion.The use of modifiers allowed reducing the mass loss of the specimens during corrosion leaching up to 50%, loss of compressive strength ∆R av. up to 40%, and flexural tensile strength ∆R_tb up to 60% as compared to the control specimens without modifiers.
Conclusions
SCC compositions modified by the complex modifier (hyperplasticizer + polymer + microsilica + basalt fiber) show high strength, frost resistance, and resistance in aggressive environments. The introduction of a certain amount of the complex modifier increases the durability of the current SCC compositions, increasing the water resistance by four grades up to W16 and the frost resistance up to F = 500, increasing the compressive strength by 20%, and reducing the mass loss of the samples during leaching corrosion by up to 50%. This emphasizes the effectiveness of the proposed complex modifier in improving the properties of SCC, especially since its constituents are inexpensive and readily available. Taking into account the results of the conducted work, complex modifiers for concrete can be recommended for practical application in Kazakhstan.
Table 1. Technical characteristics of the AR Premium hyperplasticizer.
Table 3. Physical and mechanical characteristics of chopped basalt fiber.
2.2. Investigated Compositions of Modified Self-Compacting Concrete with Regard to the Consumption and Selection of Raw Materials
Table 4. Investigated compositions of modified self-compacting concrete.
Table 5. Compressive strength test results of self-compacting modified concrete.
Table 6. Results of concrete testing under cyclic alternating freezing and thawing. Note: K frost. denotes the frost resistance coefficient.
Table 7. Results of the water absorption and water resistance tests.
Table 8. Resistance to corrosive environment of SCC samples by change of the average mass, compressive strength, and flexural tensile strength. | 9,353.4 | 2024-05-28T00:00:00.000 | [
"Engineering",
"Materials Science",
"Environmental Science"
] |
Hypoxia-Inducible Factor Is Critical for Pathogenesis and Regulation of Immune Cell Functions in Rheumatoid Arthritis
Rheumatoid arthritis (RA) is a common autoimmune disease with characteristics of synovial inflammation, pannus formation, cartilage destruction, and bone erosion. Further, the inflammation is linked to increased oxygen consumption, resulting in hypoxia within the inflammatory area. Hypoxia-inducible factor (HIF) was reported to be associated with adaptation to the hypoxic microenvironment in the RA synovium. Here, we have briefly summarized the structure and expression of HIF. Moreover, the function of HIF in inflammation, angiogenesis, cartilage damage, and immune cells of RA has been discussed.
INTRODUCTION
Rheumatoid arthritis (RA) is the most common chronic inflammatory disease, with characteristics of synovial inflammation, pannus formation, cartilage destruction, and bone erosion, which ultimately cause deformity of the affected joints (1). While the etiology and pathogenesis of RA have not been clearly elaborated, studies have shown that both environmental and genetic factors are involved in its etiology (2). Additionally, different types of immune cells have been shown to be involved, including macrophages, dendritic cells, T cells, B cells, neutrophils, and mast cells. Apart from the complex interactions between immune cells in this area, the microenvironment of the synovial fluid is complex: newly formed, highly dysfunctional, straight and regularly branching vessels result in hypoxia and a reduced oxygen supply. According to recent studies, oxygen tension is associated with cell proliferation, division, and survival, which is considered relevant to the pathogenesis of RA (2). In particular, synovial hypoxia, defined as low oxygen tension in the synovium, is a potential pathogenetic factor and plays a crucial role in promoting angiogenesis as well as the pathophysiological response in RA (3,4). In response to the alterations of oxygen tension in the inflamed joint micro-environment, hypoxia-inducible factors (HIFs) are activated and overexpressed to regulate the transcription and expression of genes related to inflammation, angiogenesis, energy metabolism, and other processes (4). Moreover, HIF is a type of nuclear transcription factor that can stimulate angiogenesis, promote pannus formation, and aggravate synovial hyperplasia (5,6). Additionally, recent studies have shown that HIF plays an important role in adaptation to hypoxic environments.
STRUCTURE AND ACTIVATION OF HYPOXIA-INDUCIBLE FACTORS (HIFS)
HIF is a heterodimeric complex consisting of an α subunit (HIF-1α, HIF-2α, or HIF-3α) and a β subunit (also known as the aryl hydrocarbon receptor nuclear translocator or ARNT). Expression of the α subunit is regulated by the oxygen concentration in the cytoplasm, while the β subunit is constitutively expressed in the cell nucleus (7,8). Both the α and β subunits, which contain a basic helix-loop-helix (bHLH) domain and PAS-A and PAS-B (Per, Arnt, and Sim; PAS) domains, combine as a heterodimeric complex to bind the hypoxia response element (HRE) within specific sequences of the promoters of target genes. Under physiological tissue oxygen tensions, HIF-1α is targeted for degradation by the hydroxylation of specific prolyl residues (Pro402 and Pro564) within the ODD domain, which is catalyzed by specific enzymes, the prolyl hydroxylases (PHDs), with ferrous iron (Fe2+) as an enzymatic cofactor. PHDs are active when oxygen is available, targeting HIF-1α for proteasomal degradation via Von Hippel-Lindau tumor suppressor protein (pVHL)-dependent ubiquitination; however, their activity declines under hypoxic conditions (9,10). Apart from O2 and Fe2+ serving as cofactors, both degradation pathways require α-ketoglutarate as a co-substrate (11).
Under hypoxic conditions, the posttranslational hydroxylation modification of these two proteins is inhibited, resulting in stabilization of HIF-α levels due to the decreased activity of PHDs and FIH and a consequently low affinity between HIF-α and pVHL (14). As for the activities of FIH and PHD, one study suggests that PHD inhibitors may only partially up-regulate the HIF transcriptional response, and biochemical analysis revealed that FIH activity is inhibited at lower oxygen tensions than PHD activity. Thus, PHD activity may decline first as oxygen levels decrease, leading to the accumulation of HIF-α in the cytoplasm (15).
Oxygen-independent factors may also induce and activate HIF-α. Heat, low pH, and biochemical factors such as cytokines, growth factors, and reactive oxygen species (ROS) may play a vital role in the induction and activation of HIF. Further, it has been reported that bacterial lipopolysaccharides can induce HIF-1α in human macrophages and monocytes via the nuclear factor-kappa B (NF-κB) and p44/42 mitogen-activated protein kinase (MAPK) pathways (4). Moreover, the accumulation of tumor necrosis factor-α (TNF-α) within the injured area has been shown to promote HIF-1α accumulation in initial inflammatory cells, with no changes in its transcription level.
HIF Expression in RA Joints
HIFs are more highly expressed in the hyperplastic RA synovium, which mainly comprises macrophage-like synoviocytes (MLS) and fibroblast-like synoviocytes (FLS) (16,17), than in the synovium of osteoarthritis (OA) patients (18). However, the relative expression of the different HIF isoforms has only begun to be elucidated. Of note, HIF-1α is reported to be strongly expressed in the intimal layer of the RA synovium, including in resident macrophages (6). By contrast, some studies showed that the expression of HIF-1α is sparse, while HIF-2α is the predominant isoform in both human RA synovium and the collagen-induced arthritis (CIA) model. Further, the expression of HIF-2α is mainly observed in the FLS of the RA synovium (19). Moreover, HIF-1α and HIF-2α are expressed in resident and infiltrating immune cells, as well as in chondrocytes and osteoclasts (19).
Role of HIF in Inflammation
An increasing number of reports has revealed that HIFs act as key regulators of RA inflammation, which they can mediate through several mechanisms (Figure 1). For example, hypoxia-related epithelial-mesenchymal transition (EMT) has been observed in FLS through the PI3 kinase/Akt/HIF-1 pathway (20). In particular, TNF, one key inflammatory cytokine, is overexpressed (10). Toll-like receptors (TLRs), typical pattern-recognition receptors, are also mainly observed in immune cells and rheumatoid arthritis synovial fibroblasts (RASF) in RA, where they regulate the inflammatory response (21, 22). Kim et al. stated that a HIF-1α-related pathway was able to up-regulate TLR-4 expression in macrophages, activating inflammation. Further, TNF-α converting enzyme, TNF-α release, and TLR expression are all HIF-1-dependent processes in RA. As pro-inflammatory factors, TNF-α, interleukin 1 (IL-1), and IL-33 production are increased in affected synovial fluid and tissue; TNF and IL-1 may thus function synergistically to induce effector functions. Additionally, IL-33 levels are increased in RA patients, indicating a role of this inflammatory factor in the severity of RA (23). As shown previously, TNF-α is a key factor that promotes HIF accumulation. Further, IL-1 and IL-33 have been shown to increase the expression of HIF isoforms in synovial fibroblasts (19,24). Accordingly, enhanced IL-33 expression correlates with HIF-1α up-regulation, forming a HIF-1α/IL-33 regulatory circuit that further increases HIF-1α. Hu et al. (25) demonstrated that HIF-1α overexpression was likely to stimulate inflammatory cytokine expression in polyIC-stimulated RASF, driving the shift toward a pro-inflammatory state in RA. Besides their pro-inflammatory functions, HIFs can also activate the production of anti-inflammatory cytokines such as IL-10. Meng et al. showed that mice with a specific HIF-1α deletion displayed a reduced number of IL-10-producing B cells together with enhanced Th17 cells, which ultimately exacerbated collagen-induced arthritis (CIA).
Role of HIF in Angiogenesis
In RA, the sub-intimal layer of the synovium is heavily infiltrated by immune cells and undergoes neovascularization. Neovascularization is generally connected to hypoxia in both physiological and pathological states (26,27). Angiogenesis, the complex process of new blood-vessel formation involving the expression of multiple genes, is a typical characteristic of RA and likely a consequence of hypoxia in affected joints (10). Numerous genes have been shown to be involved in the different steps of angiogenesis triggered by hypoxic challenge. Further, as the synovial tissue expands, newly generated vessels are essential to supply oxygen and adequate nutrients to the hypertrophic synovium; however, the oxygen supplied via this dysregulated vasculature is inadequate. Moreover, ROS generation may further enhance damage within the joints. Fava et al. (28) and Koch et al. (29) demonstrated that HIFs are capable of regulating the expression of proangiogenic mediators, including the chemokine IL-8 (CXCL8), CC-chemokine ligand 20 (CCL20, also known as macrophage inflammatory protein 3α, MIP-3α), and vascular endothelial growth factor (VEGF) (30). Among these, VEGF, the prime angiogenic target of HIF-1α with hypoxia-dependent expression, acts as the most potent endothelial-specific mitogen in RA. VEGF is a cytokine that acts on the vascular endothelium of the synovium, promoting angiogenesis and binding to cognate receptors on endothelial cells (ECs), which activates these cells to produce more proteolytic enzymes. Compared with normal synovium, angiopoietin (Ang)1, Ang2, and the tyrosine kinase receptor Tie2 are also highly elevated in RA. In the adjuvant arthritis model, the upregulation of PI3K, Akt1, p-Akt1, and mammalian target of rapamycin (mTOR) indicates that activation of PI3K/Akt/mTOR signaling may participate in the induction of the newly formed, highly dysfunctional synovial blood vessels (Figure 1). Further, inhibition of HIF-1 expression could significantly reduce VEGF-induced angiogenesis in the FLS of RA (28). In RA synovial tissue, inflammation contributes to the hypoxic environment, and HIF-1α is dramatically increased, allowing cells to become more tolerant of low oxygen tension (31). Simultaneously, the expression of HIF-1α and VEGF in synovial tissue is influenced by angiogenesis (31), and the HIF-1α and HIF-2α isoforms are expressed in the RA synovium at levels related to the magnitude of the angiogenic response (32).
Role of HIF in Cartilage Damage
Articular cartilage destruction and bone erosion are vital features of RA. As inflammation progresses, the hyperplastic pannus invades and destroys the cartilage of RA-affected joints. HIF is a key factor for cell adaptation to hypoxia in RA. Activation of HIF-1α during cartilage maturation has been demonstrated to be necessary for chondrocyte survival and homeostasis. Notably, HIF-1α is a key component that regulates the inhibition of cartilage hypertrophy and the maintenance of chondrogenic-specific markers such as aggrecan and SOX9, and dysregulation of HIF-1α would lead to skeletal dysplasia. In vivo experiments showed that expression of HIF-1α suppresses NF-κB-HIF-2α signaling, a potent upstream pathway of MMP13. Proteinases such as matrix metalloproteinases (MMPs) and ADAMTS (a disintegrin and metalloproteinase with thrombospondin motifs) can cause direct damage to the cartilage. The MMP family includes cartilage-degrading enzymes produced by type B synoviocytes in RA, and MMP expression is elevated during the repair and remodeling of damaged tissues. Moreover, tissue inhibitors of metalloproteinases (TIMPs), serine proteinase inhibitors (SERPINs), and α2-macroglobulin can regulate the activity of MMPs (33). Together with HIF-1, the expression of IL-1β-stimulated MMP1/MMP13 and IL-17/TNF-α-promoted MMP2/MMP9 results in the greater migratory and invasive ability of RA FLS (34).
Due to activation of osteoclasts under hypoxia, articular cartilage destruction is exacerbated. Swales et al. (35) reported that angiopoietin-like 4 (ANGPTL4) is overexpressed in RA osteoclasts in a HIF-1α-dependent manner, stimulating bone resorption mediated by osteoclasts.
EFFECT OF HIF ON IMMUNE CELLS IN RA
HIF and T Cells
Makino et al. (36) showed that low oxygen tension is able to promote the survival of T cells by stabilizing HIF-1α. T cells have been hypothesized to be involved in disease pathogenesis. Apart from circulating in the bloodstream, they reside in areas where the oxygen tension can be as low as 0.5%. Foxp3 is the typical marker of regulatory T cells (Tregs), and previous studies have shown that hypoxia increases Foxp3 expression in T cells, whereas HIF-1α itself has the opposite effect (37). Beyond Treg cells, other T-cell subsets with various functions are also involved, including Th1, Th2, Th17, and Tfh cells (38). For example, overexpression of HIF-1 enhances the RA synovial fibroblast-mediated expansion of inflammatory Th1 and Th17 cells. Within the inflamed RA joint, the low oxygen level is responsible for the upregulation of HIF-1 in naïve CD4+ T cells. Moreover, metabolic reprogramming in T cells can be induced by low oxygen levels. HIF-1 is able to activate the expression of the key Th17 regulator RORγt via mTOR signaling, while HIF-1 downregulates Foxp3 protein levels. It is believed that the upregulation of HIF-1 has a greater impact on Th17 differentiation; however, the role of HIF-1 in the Th17/Treg balance remains to be established. mTOR acts as an important sensor for the T-cell response as well as for interactions with other metabolic pathways, and mTOR also drives glycolysis via the upregulation of HIF-1 (39) (Figure 1). Further, Foxp3+ Treg cells have been shown to perform functions opposite to those of Th17 cells, which may be correlated with the hypoxic environment in RA. Other studies also highlighted that differentiation into Th17 cells was associated with the expression of HIF-1α-driven glycolytic genes (40).
HIF and Macrophages
Extensive studies of innate and adaptive immune cells have been performed, in which the roles of monocytes and macrophages have been discussed. Monocytes are essential in the initiation and maintenance of synovial inflammation in RA.
Circulating monocytes are recruited to the RA synovium via chemotaxis, interacting with chemotactic ligands presented by other immune cells within the synovium. On the one hand, monocytes can differentiate into specific pro-inflammatory subsets; on the other hand, they can differentiate into macrophages that promote synovial inflammation. AMP-activated protein kinase (AMPK) is an energy-sensing enzyme in macrophages. It counters the metabolic changes induced in pro-inflammatory macrophages by stabilizing the NF-κB inhibitor IκBα, thereby antagonizing nuclear factor κB (NF-κB) signaling. In particular, AMPK activity is high in M2 macrophages, where it drives the production of anti-inflammatory cytokines. In a hypoxic microenvironment, the expression of HIF-1α is upregulated in M1 rather than M2 macrophages, regulating the metabolic switch. Macrophages are considered key inflammatory players within the RA synovium. The oxygen tension in the joints of RA patients is around 2-4%, representing a hypoxic microenvironment. There are two types of macrophages, M1 and M2: M1 macrophages are able to kill intracellular microorganisms, while M2 macrophages exert anti-inflammatory effects (41). M1 phenotypes are associated strongly with tumors. Conversely, knockout of HIF-1α skewed cells toward a more M2 phenotype, with both HIF-1 and HIF-2 participating in the hypoxic response; moreover, increasing evidence indicates that HIF-2α is the predominant isoform in this process. Under hypoxia, oxidative phosphorylation is inhibited, which causes macrophages to shift to glycolysis to produce ATP. HIF-1α is able to regulate the expression of glycolysis-related enzymes at the transcriptional level and to inhibit oxidative phosphorylation. Notably, macrophages activate HIF-1α to promote the production of inflammatory factors (Figure 1), while avoiding excessive inflammatory activation by inhibiting NF-κB.
HIF and Other Myeloid Cells
In terms of other myeloid cells, HIF is involved in the regulation of neutrophil apoptosis. Mecklenburgh et al. (42) inhibited the HIF-1α degradation pathway using an iron chelator and found that hypoxia regulated the NF-κB pathway in a HIF-1α-dependent manner. Further, hypoxia was found to promote neutrophil secretion of MIP-1β (macrophage inflammatory protein-1β), leading to neutrophil survival and continued secretion of inflammatory factors.
Under hypoxic conditions, the expression of over 2,000 genes was shown to be induced during monocyte differentiation into dendritic cells (DCs) (38). Conventionally, DCs are antigen-presenting cells (APCs), connecting innate and adaptive immunity. However, under hypoxic conditions, DCs become highly inflammatory while their migration decreases (43). With increased expression of CCL5 (C-C motif chemokine ligand 5), DCs are able to drive granulocyte migration to the infected area, while the ability of DCs to capture antigens is inhibited. Moreover, the expression of TREM1, a hypoxia-inducible gene encoding a protein that can amplify the immune response, has been detected on DCs isolated from the synovium of patients with arthritis (38). As hypoxia promotes glycolysis, DC metabolism is reprogrammed away from oxidative phosphorylation (OXPHOS). Both HIF-1α and mTOR are central regulators of this metabolic switch in DCs, and a wide range of downstream targets could facilitate this process. However, more studies are needed to dissect the role of the mTOR-HIF-1α pathway in DCs (44).
DISCUSSION
Although the pathogenesis of RA remains to be completely understood, accumulating evidence supports the view that HIF and hypoxia are vital factors for its pathophysiological characteristics, including inflammation, cartilage damage, and angiogenesis. Synovial hypoxia, which can modify the metabolic environment, is linked to several pathogenic processes in RA through direct and indirect effects, for which HIF is an essential factor. Therefore, elucidating the expression of HIFs within RA joints will allow a better understanding of their activation and of the mechanisms by which they affect specific cells to contribute to RA progression. Experimental and clinical data have demonstrated a hypoxia-induced up-regulation of HIF within the synovium; HIF is therefore a factor that could promote the inflammation and onset of RA. Moreover, cell metabolism varies with the oxygen tension in the microenvironment, possibly through the HIF "switch." Hypoxia is not a distinctive feature of RA, as it can be found in various diseases, especially inflammatory autoimmune diseases. Moreover, some studies show that blocking angiogenesis pathways may reduce cellular infiltrates as well as decrease joint damage. For example, tofacitinib, an inhibitor of JAK1 and JAK3, is able to inhibit HIF-1α signaling, thereby limiting disease progression. Likewise, molecules that promote, rather than inhibit, the activity of PHD2 favor the proteasomal degradation of HIF-1α and thus dampen HIF-1α signaling; as a result, synovial angiogenesis and leukocyte infiltration are decreased (45). However, the exact function of HIF in autoimmune disease pathogenesis or disease development remains unclear, and whether HIFs play different roles during different stages of disease needs to be clarified.
Overall, further studies will be required to clarify the relationship between the pathophysiology of RA and HIF, in order to discover novel therapies to treat RA or other autoimmune diseases with great therapeutic efficacy and stability following systemic administration.
AUTHOR CONTRIBUTIONS
XG and GC wrote and edited the manuscript. All authors contributed to the article and approved the submitted version. | 4,104.8 | 2020-07-28T00:00:00.000 | [
"Medicine",
"Biology"
] |
MODIFIED A* ALGORITHM IMPLEMENTATION IN ROUTING, OPTIMIZED FOR USE IN GEOSPATIAL INFORMATION SYSTEMS
Among the main issues in the theory of geometric grids in spatial information systems is the problem of finding the shortest routing path between two points. In this paper, using graph theory and the A* algorithm in transport management, an optimal method for finding the shortest path under a shortest-time condition is reviewed. After constructing a graph consisting of the network of pathways and modelling the physical and phased area, the shortest paths are selected using a modified A* algorithm. In the proposed method, node selection is carried out by examining the angle between candidate nodes, the desired destination node, and the next node. The advantage of this method is that, owing to the elimination of some routes, the route-calculation time is reduced.
INTRODUCTION
With the growth of cities, and especially in big cities, an effective plan is needed to find routes and avoid wasting money and time. Among the issues that help in this context is routing, i.e. finding the shortest path. To solve this problem, algorithms such as Dijkstra's algorithm, A*, Bellman-Ford, Floyd-Warshall, Johnson's algorithm, and artificial intelligence techniques have been provided. In Dijkstra's method, to find the shortest path all vertices are examined, which can be time-consuming and expensive. Dijkstra's algorithm is a graph traversal algorithm that solves the shortest path problem for weighted graphs that do not have edges with negative weights. ArcGIS, one of the most popular GIS software packages, uses this algorithm for routing [Yang Yu-Jun, 2006]. In the routing problem there is a graph consisting of a set of nodes and edges, and each edge has a weight proportional to its features. A graph G(V, E) consists of two components, where V is a finite, non-empty set and E is defined as a binary relation on V [Boundy, J.A. and U.S.R. Murty, 1999]. V is the set of vertices of the graph and E consists of the edges of the graph. The purpose of routing is to find a path that can meet the needs of the problem [J. Saberian, M. Hamrahm, 88].
A*ALGORITHM
Dijkstra's algorithm is a graph traversal algorithm introduced in 1959 by the Dutch computer scientist Dijkstra. This algorithm solves the shortest path problem from a start point in weighted graphs with no negative-weight edges and, by building the shortest-path tree, computes the shortest path from the start point to all vertices of the graph. The algorithm can also be used to find the shortest path from the start vertex to a single destination vertex: in that case, once the shortest path from start to destination has been found while the algorithm is running, the algorithm is stopped.
In its simplest implementation, Dijkstra's algorithm stores the data in an array, so that the minimum value of d for the vertices outside the set S (the set of vertices whose shortest-path weight from the start is already fixed) is found with a linear scan. In this case the time complexity is given by equation 1; if an adjacency list with a priority queue is used instead, the time complexity improves to that of equation 2.

O(|V|² + |E|)   (1)
O((|V| + |E|) log |V|)   (2)

One problem with this algorithm is that, while finding the shortest path, all vertices in all directions are examined during a long process before the destination node is reached. The A* algorithm is a computer algorithm widely used in graph navigation and in finding a path between two nodes; it is in fact a generalization of Dijkstra's algorithm that uses heuristics to achieve better performance with respect to time. Owing to its speed and precision, the use of this algorithm has expanded across various branches of science. The algorithm follows the idea of best-first search and finds the shortest path between two given nodes, the start and the destination, among the other nodes. It evaluates nodes by combining two functions, g(n) and h(n), as f(n) = g(n) + h(n) (equation 3), where g(n) = the cost of reaching node n and h(n) = the estimated cost of reaching the goal node from node n. Unlike Dijkstra's method, computation time is not the main drawback of A*; rather, since the algorithm keeps all generated nodes in memory, it usually requires more memory than time [Dijkstra, E. W., 1959]. The A* search method is a combination of uniform-cost search and greedy search: it brings together the speed of the greedy technique and the optimality of the uniform-cost method, and directs the search toward the goal. Contrary to Dijkstra's algorithm, which checks all vertices in all directions, the A* algorithm examines only the part of the graph where the start and destination vertices are located. The heuristic must satisfy the admissibility condition of equation 4, h(n) ≤ h*(n), where h*(n) = the actual cost of reaching the goal from node n.
A* ALGORITHM IMPLEMENTATION AND OPTIMIZATION OF THE METHOD DIRECTION
The A* algorithm evaluates nodes with the function f(n) = g(n) + h(n). h(n) estimates the distance to the destination and can be calculated from equation 5: taking the Euclidean distance between the current node and the destination node gives an estimate that is always less than or equal to the actual distance [Chen Hong-ying, Xiao Ting, Wang Tao, He Jin-yi, 2012],

h(n) = sqrt((x_n − x_dest)² + (y_n − y_dest)²).   (5)

g(n) is the cost of the edges travelled so far, i.e. the cost of the current edge plus the cost accumulated before it; it represents the cost of traversing the path from the start node to the current node. f(n) therefore represents the cost travelled to the current node plus the estimated cost from the current node to the destination node. The node with the minimum value of f(n) is the best candidate for reaching the target.
The node with the lowest f(n) is chosen and expanded, and is then compared with the nodes already on the list to check whether it is still the best candidate; otherwise, a better node from the list is selected instead.
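For concreteness, the node-selection loop described above can be sketched as follows. The authors' implementation was written in VB.Net with ArcEngine (see the case study below); the Python sketch here is only illustrative, and the graph structure, node coordinates and edge costs are assumed placeholders rather than the paper's actual data model.

```python
import heapq
import math

def euclid(a, b):
    """Straight-line (Euclidean) distance used as the admissible heuristic h(n)."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def a_star(graph, coords, start, goal):
    """graph: dict node -> list of (neighbor, edge_cost); coords: dict node -> (x, y)."""
    open_heap = [(euclid(coords[start], coords[goal]), start)]  # entries are (f, node)
    g = {start: 0.0}            # g(n): cost from the start node to n
    parent = {start: None}
    closed = set()
    while open_heap:
        f, node = heapq.heappop(open_heap)
        if node == goal:        # reconstruct and return the path
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1], g[goal]
        if node in closed:
            continue
        closed.add(node)
        for nbr, cost in graph[node]:
            tentative = g[node] + cost          # candidate g(neighbor)
            if nbr not in g or tentative < g[nbr]:
                g[nbr] = tentative
                parent[nbr] = node
                h = euclid(coords[nbr], coords[goal])  # h(n) never overestimates
                heapq.heappush(open_heap, (tentative + h, nbr))
    return None, math.inf
```

With a small toy graph and coordinate dictionary, `a_star(graph, coords, start, goal)` returns the node sequence and its total cost; the priority queue is what keeps the expansion focused between the start and destination vertices.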
Moving away from the destination, i.e. in the direction opposite to it, increases h(n) of the current node; such nodes are therefore most likely not good candidates for the best route.
Methods can therefore be used that remove from the navigation list paths leading away from the destination. In the method proposed and implemented in this study, this is done by calculating the angle between the candidate next node and the destination node. Experience shows that the best threshold for this angle of approach is 180 degrees; in other words, nodes lying in the direction opposite to the destination node are not appropriate for continuing the path.
To restrict navigation to edges whose angle is less than 180 degrees, the following equation 6 is used. The flowchart in Figure 2 shows the implementation steps of the modified A* algorithm.
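A minimal sketch of this direction restriction follows; the threshold reported above is passed in as a parameter (`max_angle_deg`), and the coordinate-pair arguments are assumptions for illustration, not the notation of equation 6.

```python
import math

def approach_angle(current, candidate, destination):
    """Angle (degrees) between the edge current->candidate and the direction current->destination."""
    ux, uy = candidate[0] - current[0], candidate[1] - current[1]
    vx, vy = destination[0] - current[0], destination[1] - current[1]
    norm = math.hypot(ux, uy) * math.hypot(vx, vy)
    if norm == 0.0:
        return 0.0
    cos_angle = (ux * vx + uy * vy) / norm
    # clamp to [-1, 1] to avoid domain errors from floating-point round-off
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))

def keep_candidate(current, candidate, destination, max_angle_deg=180.0):
    """Discard neighbors whose approach angle reaches the chosen threshold."""
    return approach_angle(current, candidate, destination) < max_angle_deg
```

Calling `keep_candidate` inside the neighbor loop of the A* sketch above prunes edges that point away from the destination, which is what reduces the number of examined nodes in the assessment below.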
MODELING USING THE MODIFIED A* ALGORITHM APPLIED IN THE HOLY CITY OF MASHHAD (CASE STUDY)
The modified A* algorithm introduced above was implemented in the VB.Net 2012 IDE with the ArcEngine 10.1 map engine. Before applying the algorithm, a network layer of streets and intersections was prepared in the ArcGIS environment, and a network of nodes and edges was built on top of it. This network was used for faster navigation among adjacent nodes. Figure 3 shows an example of running the program on part of the city of Mashhad.
ASSESSMENT METHODS
To evaluate the A* and Dijkstra algorithms, two different areas of Mashhad were studied. The two methods were compared in terms of computing time (in milliseconds, on a computer with a Core i3 processor) and the number of nodes traced by each algorithm. The results are given in Tables 1 and 2. As the tables show, the response time of the Dijkstra algorithm relative to the modified A* algorithm grows with the extent of the selected area, increasing sharply with the number of nodes that need to be investigated. In the modified A* algorithm, when the distance is approximately tripled, the number of examined nodes roughly quadruples, while in the Dijkstra algorithm it grows by a factor of approximately 8.5.
CONCLUSIONS AND RECOMMENDATIONS
Dijkstra's algorithm calculates the shortest distance from node to node, which is not suitable for systems with a large number of nodes. Larger cities contain many nodes, for which the Dijkstra algorithm requires a great deal of time and cost. In terms of path length, the Dijkstra algorithm gives somewhat better results than the modified A* algorithm, but this improvement is only approximately 10 to 15 percent, which matters less than the processing time, especially over wide areas. A* is a heuristic algorithm that uses heuristic information to search for a route; it expands fewer nodes and therefore requires less time. To improve the effectiveness of the A* method, a direction restriction was used, eliminating nodes that lie in the opposite direction and further increasing the traversal speed. Future studies should examine the choice of the angular threshold of the proposed approach, and even continuous (floating-point) angle values for different routes.
Figure 1: Angle and distance in graphs.
Figure 4: Path and investigated nodes for the modified A* algorithm and Dijkstra.
Table 2: Result of the assessment in the second area, with two different routes. | 2,135 | 2014-10-22T00:00:00.000 | [
"Computer Science",
"Geography"
] |
Transcription and FACT facilitate the restoration of replication-coupled chromatin assembly defects
Genome duplication occurs through the coordinated action of DNA replication and nucleosome assembly at replication forks. Defective nucleosome assembly causes DNA lesions by fork breakage that need to be repaired. In addition, it causes a loss of chromatin integrity. These chromatin alterations can be restored, even though the mechanisms are unknown. Here, we show that the process of chromatin restoration can deal with highly severe chromatin defects induced by the absence of the chaperones CAF1 and Rtt106 or a strong reduction in the pool of available histones, and that this process can be followed by analyzing the topoisomer distribution of the 2µ plasmid. Using this assay, we demonstrate that chromatin restoration is slow and independent of checkpoint activation, whereas it requires the action of transcription and the FACT complex. Therefore, cells are able to “repair” not only DNA lesions but also chromatin alterations associated with defective nucleosome assembly.
Results
Defective RC-nucleosome deposition causes transient changes in DNA topology and chromatin structure of the 2µ plasmid. Partial depletion of histones causes a dramatic loss of chromatin integrity that is associated with a loss of negative supercoiling 38 . This loss of negative supercoiling is due to the fact that the assembly of each nucleosome introduces one negative superhelical turn 39 . This topological change can be detected by analyzing the distribution of topoisomers of a plasmid in chloroquine-containing gels, and has been extensively used to address chromatin alterations in vivo and in vitro 36,38,[40][41][42][43][44] . Specifically, the loss of negative supercoiling in histone-depleted yeast cells can be detected by analyzing the endogenous 2µ plasmid in a strain in which the only source of histone H4 is under control of the doxycycline-regulated tet promoter (t::HHF2 strain; Fig. 1A) 36 . The topological behavior in response to histone depletion of the 2µ plasmid is similar to that displayed by a centromeric plasmid, but it is more sensitive because its multicopy nature 38 . The 2µ plasmid is organized as two unique regions separated by inverted repeats (FRT sites). These repeats can recombine leading to equal amounts of two plasmids that differ in the orientation of one unique region with respect to the other (Fig. 1A, left panel). Although the plasmid is replicated through a canonical semiconservative mechanism from the origin, this recombination system helps to maintain the copy number by a DNA amplification mechanism that leads to rolling circle replication intermediates 45 . To focus on the nucleosome-associated topological changes, only the distribution of the monomeric forms is analyzed 36,38,44 .
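The topological bookkeeping behind this assay can be made explicit. The relations below are the standard linking-number identities for a closed circular plasmid, written out only to spell out why losing nucleosomes shifts topoisomers toward the relaxed position; the approximate one-negative-turn-per-nucleosome figure is the one quoted above (ref. 39), and the symbol n_nuc is introduced here for illustration.

```latex
% Standard linking-number relations for a covalently closed plasmid:
\[
  Lk = Tw + Wr, \qquad
  \Delta Lk \;=\; Lk - Lk_{0} \;\approx\; -\,n_{\mathrm{nuc}},
\]
% so a smaller number of assembled nucleosomes (n_nuc) gives a less negative
% \Delta Lk, i.e. the topoisomer distribution moves toward the relaxed forms
% detected in the chloroquine gels.
```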
The aforementioned DNA supercoiling analyses were carried out in asynchronous cultures. To understand the cell cycle dynamics of these topological changes, cells were synchronized in G1 and released into fresh medium under conditions of HHF2 repression (0.25 µg/ml dox) (Fig. 1B). Whereas the distribution of topoisomers was similar along the cell cycle in the wild type strain, the t::HHF2 mutant displayed wild-type topological levels in G1 and a strong but transient loss of negative superhelical density from early S phase to G2/M (Fig. 1B).
To confirm that this transient defect in DNA topology was associated with the process of RC-nucleosome assembly, t::HHF2 cells were synchronized in G1 and released into S phase in the absence of Cdc6, which is essential for replication initiation but not for further cell-cycle events 46 . As shown in Fig. 1C, DNA replication was required for the transient loss of negative supercoiling in histone-depleted cells. To further demonstrate that the effect of histone depletion on DNA topology is a consequence of a defect in the process of RC-chromatin assembly, we analyzed the distribution of topoisomers during the cell cycle in a cac1∆ rtt106∆ mutant. This mutant also displayed wild-type topological levels in G1 and a transient loss of negative supercoiling during S-G2/M (Fig. 1B). The main difference was that the recovery of negative supercoiling was faster in cac1∆ rtt106∆ than in t::HHF2 cells.
These results suggest that the alterations in chromatin structure induced by defective histone deposition occur transiently during S phase and are post-replicatively restored. To confirm this, we analyzed the pattern of nucleosomes in the 2µ plasmid by indirect-end labelling of MNaseI-treated cells at different times during the cell cycle. We focused on the chromatin structure of an EcoR1 fragment containing the FLP1 and REP2 genes (Fig. 1A, left panel). The t::HHF2 and cac1∆ rtt106∆ mutants displayed a much more altered chromatin structure 60 min after G1 release than in G1, and these alterations were partially restored 60 min later (Fig. 3D). These time points correspond to G2/early mitosis and late mitosis, respectively, as determined by FACS, cell morphology and nuclei staining (Fig. 1D; note that the t::HHF2 and cac1∆ rtt106∆ mutants accumulate at metaphase due to checkpoint activation 7,47). It is worth noting that t::HHF2 and cac1∆ rtt106∆ shared similar chromatin alterations: high accessibility of the nucleosomal DNA and similar modified bands.

Chromatin assembly defects in cac1∆ rtt106∆ are largely restored genome wide. We asked if the loss and further recovery of chromatin integrity of the 2µ plasmid in nucleosome-deposition mutants reflected a genome-wide process. For this, we performed high-throughput sequencing of MNase I-digested chromatin (MNase-seq) followed by dynamic analysis of nucleosome position and occupancy by sequencing (DANPOS) 48, which allows nucleosomes to be mapped along the whole genome. We analyzed the nucleosomal landscape of cac1∆ rtt106∆ and wild type cells both in G1 and G2 phases (60 min after G1 release) to allow completion of genome replication. The absence of CAF1 and Rtt106 during DNA replication caused severe defects in the distribution of nucleosomes in G2 (Fig. 2A). This loss of chromatin integrity became particularly evident by a strong reduction in the amplitude of the nucleosomal oscillation in G2, which indicates a loss of nucleosome phasing (Fig. 2B, G2 panel). This chromatin defect was less severe in S than in G2 phase (compare panel G2 in Fig. 2B with Fig. S2A), consistent with an accumulation of affected genes as replication is completed. Nucleosome positioning became slightly better defined in G1 in the wild type strain, especially around the NFR where nucleosomes −1 to +3 increased their occupancy (Fig. 2B, wt panel). The similarity in the nucleosomal profiles of G2 and G1 was due to the fast maturation of the newly assembled chromatin during S phase 25,27,28,31. In contrast, the cac1∆ rtt106∆ mutant showed a significant drop in the occupancy of nucleosome −1 in G1 relative to the wild type strain (Fig. 2B, G1 panel). Indeed, the analysis of individual genes revealed a loss not only of this nucleosome but also of gene body nucleosomes in multiple genes in G1 (Fig. S2B). This phenotype is likely related to the replication-independent, transcription-dependent role of Rtt106 preventing spurious transcription and maintaining promoter fidelity by histone replacement 49,50. Apart from this specific alteration, chromatin integrity was largely restored in the cac1∆ rtt106∆ mutant in G1 (Fig. 2A and B; compare mutant and wild type profiles in G2 and G1 panels).
In conclusion, cells are able to correct severe chromatin alterations occurring during the process of RC-nucleosome assembly, and these changes are associated with a transient loss of plasmid negative supercoiling. Therefore, we used this plasmid topology assay to study the chromatin restoration process.
Chromatin restoration in RC-nucleosome deposition mutants is independent of cell cycle arrest. The shift in the distribution of topoisomers induced by defective histone supply in t::HHF2 and cac1∆ rtt106∆ cells was largely restored to wild-type levels in mitosis (Fig. 1B). To confirm that chromatin restoration occurred before the metaphase-anaphase transition, we repeated the plasmid supercoiling analysis in cac1∆ rtt106∆ cells expressing cdc20-3, a thermosensitive allele of the APC cofactor Cdc20 that causes a metaphase arrest at restrictive temperature 51. In this case, G1-released S phase cells were washed and resuspended into fresh medium with α-factor for G1 resynchronization. The recovery of plasmid negative supercoiling occurred with similar kinetics with and without cell cycle-induced arrest (Figs. 3A and S3A), indicating that chromatin restoration of the 2µ plasmid occurs before anaphase.

Chromatin assembly mutants transiently arrest in metaphase 7,14,47,52,53. Therefore, we wondered if this arrest was required for the recovery of plasmid negative supercoiling. Most chromatin assembly mutants, including the double mutant cac1∆ rtt106∆, arrest in metaphase due to the activation of the DNA damage checkpoint (DDC) 7,14,52,53. We observed that a triple mutant cac1∆ rtt106∆ mec1∆, defective in DDC activation, was proficient in the recovery of negative supercoiling (Fig. S3B). However, chromatin assembly defects can also lead to a metaphase arrest by activation of the spindle-assembly checkpoint (SAC), as is the case for the t::HHF2 mutant 47. These cells, interestingly, do not activate the DDC despite the accumulation of DNA damage (Fig. S3C) 47. Therefore, t::HHF2 mad2∆, lacking a functional SAC, is an optimal mutant to address if cell cycle arrest is required for chromatin restoration. The elimination of the metaphase arrest in a SAC-deficient mad2∆ background did not alter the kinetics of plasmid supercoiling of the t::HHF2 mutant (Fig. 3B). The recovery of negative supercoiling was slightly worse in t::HHF2 mad2∆ than in t::HHF2 cells; however, this difference is likely associated with the accumulation of dead cells in mitosis and G1 by chromosome mis-segregation 47. Therefore, the post-replicative restoration of the chromatin assembly defects is independent of cell cycle arrest.
Restoration of cac1∆ rtt106∆-mediated chromatin assembly defects are facilitated by transcription. Transcription activity helps to correctly position nucleosomes 34 , and accordingly it is required for chromatin maturation 27,29,32 . Therefore, transcription provides a potential mechanism to restore post-replicatively a loss of chromatin integrity occurring during genome duplication. To address the relevance of transcription in the recovery of the cac1∆ rtt106∆-mediated chromatin assembly defects, we followed the distribution of plasmid topoisomers along the cell cycle in cells expressing a wild type or a thermosensitive allele of the largest subunit of RNAPII (rpb1-1) 54 . Since transcription was essential to exit from G1 (Fig. S4A), cells were shifted from permissive (26 °C) to restrictive temperature (37 °C) in the middle of S phase (peak of negative supercoiling loss; 30 min for all strains except for the triple mutant cac1∆ rtt106∆ rpb1-1 that required 60 min because of a slower G1 exit). After the shift, cells were maintained at restrictive temperature for 90 min. The absence of transcription post-replication did not affect the pattern of plasmid supercoiling during the cell cycle ( Fig. 4; compare rpb1-1 with wt). The loss of negative supercoiling in the triple mutant cac1∆ rtt106∆ rpb1-1 was less pronounced than in the double mutant cac1∆ rtt106∆ (Fig. 4; compare the shift in topoisomers from G1 to S phase in both strains), suggesting that the rpb1-1 allele slightly affected the accumulation of chromatin assembly defects at permissive temperature. Importantly, the absence of transcription strongly reduced the recovery of plasmid negative supercoiling in the cac1∆ rtt106∆ mutant (Fig. 4; compare cac1∆ rtt106∆ rpb1-1 and cac1∆ rtt106∆ strains at 60-90 min after the shift to restrictive temperature), even though a slight recovery was observed in the triple mutant at later times ( Fig. 4; compare 60 and 90 min after the shift in the cac1∆ rtt106∆ rpb1-1 mutant). Therefore, transcription helps to restore the loss of chromatin integrity associated with defective RC-nucleosome assembly.
FACT helps to restore cac1∆ rtt106∆-mediated chromatin assembly defects. Two chromatin-remodeling pathways have been proposed to maintain nucleosome integrity during transcription. The first pathway depends on Asf1 and the HIR complex and plays a major role at the intergenic region by nucleosome exchange. FACT and Spt6 are the major effectors of the second pathway and are more, but not exclusively, dedicated to the reassembly of histones throughout the gene bodies 55. First, we addressed the role of the HIR complex (formed by Hir1, Hir2 and Hir3 in S. cerevisiae), which has been involved both in chromatin maturation and in the restoration of cac1∆-associated chromatin defects 27,28. The absence of the HIR complex in cells lacking its major subunit (Hir1) did not prevent the recovery of plasmid negative supercoiling in cac1∆ rtt106∆ cells (Fig. 5A), suggesting that it is not required for the recovery of chromatin integrity in this mutant.
To address the relevance of the second pathway in restoring replication-coupled chromatin assembly defects, we analyzed the effect on DNA topology of the thermosensitive allele spt16-G132D; this mutation affects the stability of Spt16 at restrictive temperatures 56 . Since the elimination of Spt16 at restrictive temperature causes transcription-associated chromatin assembly defects 56 , we performed the analysis in cells synchronized in G1 at permissive temperature (26 °C) and released at semi-permissive temperature (31 °C) until the following G1 phase, which required different times for each strain. The logic behind is to allow a complete restoration of the cac1∆ rtt106∆-induced chromatin defects. At this semi-permissive temperature, plasmid topology was hardly affected in the spt16-G132D mutant (Fig. 5B). Importantly, the loss of negative supercoiling induced by the absence of CAF1 and Rtt106 was not recovered in the triple mutant cac1∆ rtt106∆ spt16-G132D. Indeed, we observed a slight but reproducible loss of negative supercoiling in the triple mutant in G1, suggesting that the Spt16-G132D protein is partially defective in chromatin restoration even at permissive temperature. In contrast to spt16-G132D, a spt16-m allele that specifically affects the RC-histone deposition activity of FACT 17 , was able to restore the chromatin assembly defects induced by the absence of CAF1 and Rtt106 (Fig. 5C). Therefore, the activity of FACT facilitates chromatin restoration after defective RC-nucleosome assembly.
Discussion
The efficiency of the nucleosome deposition process during DNA replication eliminates chromatin characteristics, which are recovered through a maturation process that depends on DNA sequence composition, GRFs and chromatin remodeling factors, and transcription [25][26][27][28][29][30][31][32][33] . Several studies support that cells can restore chromatin alterations generated by mutations in nucleosome deposition factors in yeast (caf1∆, asf1∆ and rtt109) and Drosophila (Caf1-105 knockdown) 26,28,37 , although the mechanisms of restoration are poorly understood. In yeast, asf1∆ and rtt109∆ mutants delay histone deposition due to a lack of H3K56 acetylation that reduces histone delivery to CAF1 and Rtt106. Accordingly, chromatin is less affected in asf1∆ and rtt109∆ than in cac1∆ and rtt106∆ mutants 25 Here, we have shown by genome-wide MNase-seq that the severe loss of chromatin integrity induced during S phase by the absence of CAF1 and Rtt106 is largely restored post-replication, and that the process of chromatin restoration can be followed by analyzing the level of plasmid supercoiling along the cell cycle. This provides an alternative to the more expensive and time-consuming MNase-seq assay to screen for genetic requirements of the chromatin restoration process. Using this plasmid topology assay, we have shown that cells are able to restore even the severe chromatin defects induced by a strong reduction in the pool of available histones, and that chromatin restoration is facilitated by the action of transcription and the FACT complex. We have focused on the monomeric and not in the multimeric forms of the 2µ plasmid to study the connection between DNA topology and chromatin alterations, thus minimizing template-specific effects.
In any case, chromatin dynamics is influenced by the structural and functional particularities of the analyzed regions, and therefore a deeper characterization will require genome-wide approaches.
Our genome-wide analysis shows that most chromatin assembly defects generated in cac1∆ rtt106∆ cells during DNA replication become restored in G1. However, chromatin was more altered in G2 than in S phase, which is consistent with the severe genome-wide chromatin assembly defects remaining in histone-depleted cells arrested in G2/M 57. These results suggest that chromatin restoration is slower than chromatin maturation, which is completed in 5-20 min after replication fork passage 25,27,28, yet highly efficient even under conditions that strongly disrupt the chromatin landscape such as those induced in cac1∆ rtt106∆ and t::HHF2 mutants. The accumulation of chromatin alterations and their "repair" during chromatin restoration can also be detected by following the distribution of plasmid topoisomers in RC-nucleosome assembly mutants during the cell cycle. These mutants display a strong and transient loss of plasmid negative supercoiling during the cell cycle as a consequence of RC-chromatin disruption. In contrast, the population of plasmid topoisomers does not change in the wild type, which reflects the speed and efficiency of the chromatin maturation process. Therefore, this assay allows chromatin restoration to be followed specifically.

Figure legend: Plasmid topoisomer distribution of the 2µ plasmid in wild type, rpb1-1, cac1∆ rtt106∆ and cac1∆ rtt106 rpb1-1 cells synchronized in G1 and released into fresh medium until mid-S phase (60 min for cac1∆ rtt106 rpb1-1 and 30 min for the rest) at 26 °C, and then shifted to and incubated with pre-heated fresh medium at 37 °C for the indicated times. Samples were run into different gels due to space limitations, and processed in parallel. Cell cycle progression and topoisomer profiles are shown. r and SC(−) indicate relaxed and negative supercoiling, respectively. Cropped images show only relaxed and negatively supercoiled topoisomers. Original gels are presented in Fig. S4B.
Transcription activity is a major determinant of nucleosome position 34 , and it is required for chromatin maturation after DNA replication 27,29,32 . We show that transcription facilitates chromatin restauration. The genomewide chromatin analysis showed that the loss of nucleosome phasing at G2 in the cac1∆ rtt106∆ mutant mainly affected the gene bodies, as previously observed in histone-depleted cells in G2/M 57 . The analysis of nascent chromatin at early time points in a cac1∆ mutant showed nucleosome defects both at promoters (gain and loss of occupancy at the NFR and the flanking nucleosomes, respectively) and gene bodies (loss of phasing) 28 . Therefore, the promoter architecture of the cac1∆ rtt106∆ mutant is likely first reconstructed to prime active transcription and restore chromatin in the gene body as proposed for chromatin maturation in yeast, where the rapid binding of GRFs at promoters generate molecular landmarks that fix the positions of flanking nucleosomes 25,30 . This mechanism is also likely necessary during the restoration of a severely altered chromatin in order to provide a rule for the transcription machinery to properly reposition nucleosomes during elongation. However, it is unlikely that this process resembles chromatin maturation in the initial steps. During chromatin maturation, restructured promoters with bound GRFs are still refractory to RNAPII recruitment 31 , which explains why transcription is buffered for a while after replication 31,58,59 . In contrast, defective chromatin assembly in asf1∆ and cac1∆ rtt106∆ cells causes a transient accumulation of aberrant coding and non-coding transcripts behind the replication forks 37 . This effect is more pronounced in the cac1∆ rtt106∆ mutant, likely because of its higher nucleosome deposition defects and the role of Rtt106 in preventing aberrant transcription 49,50 . We speculate that transcription from spurious initiation sites may slow the process of chromatin restoration because of the repositioning of nucleosomes without a correctly defined reference.
The requirement of transcription elongation for chromatin maturation supports a role for chromatin remodeling complexes traveling with RNAPII like CHD1 and ISW1b. In agreement with this possibility, nascent chromatin-associated alterations persist in the absence of these chromatin remodelers 25,27 . Less clear is the relationship with transcription for the HIR complex 27 , a chromatin remodeler that participates in replication-independent histone turnover, preferentially at intergenic regions 55,60,61 . The study of bulk nucleosome organization has also pointed to a role for the HIR complex in the restoration of cac1∆-induced nucleosome assembly defects 28 . Our plasmid topology assay did not reveal any role for the HIR complex in chromatin restoration in the cac1∆ rtt106∆ mutant. Although the difference might be plasmid-specific, it cannot be excluded that the loss of nucleosome phasing in cac1∆ hir1∆ cells reflects an additive effect of the absence of both complexes, as the hir1∆ mutant by itself displayed a reduction in the amplitude of the nucleosomal oscillation on coding regions 28 .
FACT is a nucleosome remodeler complex with a critical role in nucleosome repositioning during transcription elongation. FACT travels with the RNAPII promoting the redeposition behind RNAPII of the original nucleosomes evicted during elongation through a stepwise mechanism of nucleosome disassembly-assembly that helps to maintain the epigenetic identity 55,[62][63][64][65][66] . We have observed that the spt16-G132D mutant has no defects in the distribution of plasmid topoisomers but prevents the recovery of the negative supercoiling level lost during DNA replication in a cac1∆ rtt106∆ mutant at semi-permissive temperature (31 °C). Therefore, the amount of Spt16 at this temperature seems to be sufficient to avoid a loss of nucleosomes during transcription but not to restore defective chromatin assembly. This suggests that the mechanism by which FACT restores chromatin is either different or requires more Spt16 than the mechanism by which FACT redeposits nucleosomes during transcriptional elongation. FACT is targeted to chromatin by recognizing the surface of disrupted nucleosomes generated mainly-but not exclusively -by transcription 56,67,68 . This observation, together with the ability of FACT to assemble nucleosomes led to Formosa and Winston to propose a role for FACT in the "repair" of disrupted nucleosomes 69 . It is likely that the dependency on transcription of the chromatin restoration process in cac1∆ rtt106∆ cells reflects the need to disrupt nucleosomes to target FACT, which would be required at higher levels than in the wild type strain to additionally cope with displaced nucleosomes. An alternative but not exclusive possibility for the higher demand of Spt16 during chromatin restoration is that not only the position but also the integrity of some nucleosomes become affected in cac1∆ rtt106∆ cells, targeting FACT in a transcriptionindependent manner.
In summary, cells are able to largely restore a severe loss of chromatin integrity induced under conditions of defective nucleosome assembly, providing a mechanism to buffer its impact on cell fitness. In addition, using plasmid topology as an easy and specific assay to study chromatin restoration, we have shown that this process requires the action of both transcription and FACT. This assay may help to uncover additional factors involved in chromatin restoration, as a previous step to a more detailed genome-wide characterization.
Methods
Yeast strains and growth conditions. Yeast strains used in this study are listed in Table S1. Cells were grown at 30 °C (unless otherwise indicated) in YPAD (experiments including rpb1-1, spt16 and Gp::CDC6 strains) or supplemented minimal medium (SMM) (rest). For metaphase synchronization, cells were treated with 15 µg/ml nocodazole for 1 h. For G1 synchronization, cells were grown to mid-log-phase and α factor was added twice at 90 min intervals at 0.5 μg/ml, except for t::HHF2 strains (treated with 1 μg/ml) and rpb1-1 strains (treated twice at 150 min intervals). Cells were then washed three times and released into fresh medium with 50 μg/ml pronase. For G1 resynchronization, cells released into S phase were washed and resuspended in fresh medium with α-factor at 1 μg/ml (t::HHF2 strains) or 0.5 μg/ml (rest of strains) until G1. To induce nucleosome depletion, t::HHF2 cells growing in the presence of 5 µg/ml doxycycline were shifted to 0.25 μg/ml during G1 synchronization and release. Cdc6 depletion was performed as previously described 70: briefly, cells were treated with nocodazole for 2 h, shifted to 2% glucose-containing medium with DMSO and nocodazole for 2 additional hours, synchronized in G1 in 2% glucose-containing medium with α-factor for 2 h, and released into fresh 2% glucose-containing medium with 50 µg/ml pronase for 1 h.
Flow cytometry. DNA content analysis was performed by flow cytometry as reported previously 36 . Cells were fixed with 70% ethanol, washed with phosphate-buffered saline (PBS 1X), incubated with 1 mg of RNaseA/ ml PBS, and stained with 5 µg/ml propidium iodide. Samples were sonicated to separate single cells and analysed in a FACSCalibur flow cytometer.
Plasmid supercoiling analysis. The distribution of topoisomers of the 2µ plasmid was analyzed as previously described 36. Briefly, total DNA was extracted using a standard zymolyase-SDS protocol and run in 0.8% TPE 1× agarose gels containing 4 μg/ml chloroquine for 36 h at 1.6 V/cm with buffer recirculation. Negatively supercoiled topoisomers are resolved at this chloroquine concentration. Gels were blotted onto Hybond™-XL membranes and hybridized with a 32P-labeled FLP1 fragment amplified by PCR from genomic DNA with oligos 5′-tgattacacataacggaaca-3′ and 5′-ttcagcactaccctttagc-3′. Signals were acquired in a Fuji FLA5100 and quantified with the ImageGauge analysis program. The total DNA signal (area under the curve) of the raw topoisomer profiles was equalized to eliminate DNA loading differences.
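As a rough illustration of the loading normalization described above (not the actual ImageGauge workflow), lane profiles can be rescaled to a common total signal before topoisomer distributions are compared; the array and function names are placeholders.

```python
import numpy as np

def equalize_profiles(profiles):
    """Scale each lane profile (1D intensity array) so that its area under the curve is 1."""
    normalized = []
    for lane in profiles:
        lane = np.asarray(lane, dtype=float)
        total = np.trapz(lane)              # total DNA signal of the lane
        normalized.append(lane / total if total > 0 else lane)
    return normalized

def signal_weighted_center(profile):
    """Centroid of a topoisomer profile; it shifts toward the relaxed position
    when negative supercoiling is lost."""
    profile = np.asarray(profile, dtype=float)
    positions = np.arange(profile.size)
    return float((positions * profile).sum() / profile.sum())
```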
Chromatin analysis by MNaseI digestion and indirect-end labeling. Chromatin analyses by MNaseI digestion and indirect-end labeling were performed as previously described 57. MNase digestions used for indirect-end labelling were incubated with EcoRI, resolved in 1.5% agarose gels, blotted onto a Hybond™-XL membrane and probed with a 32P-labeled specific PCR fragment located close to one of the EcoRI sites (oligos 5′-ataccaattcctcttcctag-3′ and 5′-tccaaatatacaagtggatc-3′). Signals were acquired in a Fuji FLA5100 with the Image Gauge analysis program.
Chromatin analysis by MNase-seq. Chromatin analyses by MNase-seq were performed as previously described 57. Briefly, MNaseI-digested DNA samples from two (G1 and G2) or one (S phase) biological replicates for each yeast strain were obtained as indicated above for indirect-end labelling. MNase-digested samples enriched in mononucleosomes were loaded in a 1% agarose gel, and the DNA corresponding to mononucleosomes was purified with a DNA purification kit (Bioline; BIO-52059). DNA size and quality were confirmed by electropherogram analysis (2100 Bioanalyzer™). Library construction and sequencing were performed at the Genomics Core Facility of CABIMER. DNA libraries were prepared from 10 ng of mononucleosomal DNA using the TruSeq ChIP Library Preparation Kit (Illumina), and the size distribution and molarity of each library were analyzed with the Agilent™ DNA High Sensitivity Kit (Agilent 2100 Bioanalyzer). DNA libraries were sequenced on the NextSeq 500 Sequencing System (Illumina), and raw data were processed for basecalling, filtering and trimming to generate the FASTQ files using the BaseSpace Onsite v3.22.91.158 software from Illumina. Sequence reads were mapped to the S. cerevisiae genome sacCer3 with Bowtie2 71, and potential PCR duplicates were removed with SAMtools on the Galaxy platform (usegalaxy.org) 72. The peak-calling algorithm Dpos function (DANPOS 2.2.0) 48,73 was used for nucleosome occupancy maps and comparative analyses using default parameters. Average nucleosome occupancy patterns flanking transcription start sites (TSS) from one (Fig. S2A) or two (Fig. 2) biological replicates were plotted as average density maps using the Profiles function (DANPOS 2.2.0) 48,73.
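The averaging performed by the DANPOS Profiles step can be pictured with the following sketch, which is not DANPOS code: given a per-base occupancy track and a list of TSS positions with strands, it simply averages strand-oriented windows around each TSS. The variable names and the 1 kb window size are assumptions for illustration only.

```python
import numpy as np

def tss_metaprofile(occupancy, tss_list, flank=1000):
    """occupancy: dict chrom -> 1D numpy array of per-base nucleosome occupancy.
    tss_list: iterable of (chrom, position, strand) with strand '+' or '-'.
    Returns the average occupancy from -flank to +flank around the TSS."""
    windows = []
    for chrom, pos, strand in tss_list:
        track = occupancy.get(chrom)
        if track is None or pos - flank < 0 or pos + flank + 1 > track.size:
            continue                          # skip TSS too close to chromosome ends
        window = track[pos - flank: pos + flank + 1].astype(float)
        if strand == '-':
            window = window[::-1]             # orient all windows 5' -> 3'
        windows.append(window)
    return np.mean(windows, axis=0) if windows else None

# positions relative to the TSS, e.g. for plotting the composite density map
rel_positions = np.arange(-1000, 1001)
```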
Genome-wide data. Nucleosome profiles along the genome were visualized using the Integrative Genome Viewer (IGV) 74 .
Western blot. Yeast protein extracts were prepared using the TCA protocol 75 and resolved on an 8% SDS-PAGE gel. Rad53 was detected by standard western blot analysis with the rabbit polyclonal antibody JDI48 76.
Data availability
The data that supports the findings of this study are available from the corresponding author upon reasonable request. Unique biological materials used in this study are available from the corresponding author. Raw data from MNase-seq have been deposited at the MIAME-compliant Gene Expression Omnibus (GEO) database at the National Center for Biotechnology Information (http:// www. ncbi. nlm. nih. gov/ geo/), and are accessible through the accession number GSE228861. | 6,371.4 | 2023-07-14T00:00:00.000 | [
"Biology"
] |
Topological dynamics and current-induced motion in a skyrmion lattice
We study the Thiele equation for current-induced motion in a skyrmion lattice through two soluble models of the pinning potential. Comprising a Magnus term, a dissipative term and a pinning force, Thiele's equation resembles Newton's law, but by virtue of the topological character of the first term it differs significantly from Newtonian mechanics; and because the Magnus force is dominant, unlike its mechanical counterpart, the Coriolis force, skyrmion trajectories do not necessarily have mechanical counterparts. This is important if we are to understand skyrmion dynamics and tap into its potential for data-storage technology. We identify a pinning threshold velocity for the one-dimensional pinning potential, and for a two-dimensional attractive potential we find a pinning point; the skyrmion trajectories toward that point are spirals whose frequency (compare Kepler's second law) and amplitude decay depend only on the Gilbert constant and the potential at the pinning point. Other scenarios, e.g. other choices of initial spin velocity, a repulsive potential, etc., are also investigated.
Introduction
The experimental discovery in 2009 of a hexagonal skyrmion lattice in MnSi under an external vertical magnetic field generated a convergence of efforts to understand better the interplay between the ferromagnetic exchange and Dzyaloshinskii-Moriya couplings in conjunction with crystalline field interactions in B20 compounds, i.e. magnetic materials lacking inversion symmetry (or chiral magnets) [1,2]. A skyrmion is a planar topological spin texture whose spins are distributed in a circularly symmetric and continuous manner with the spin at the center pointing downward while all spins at the edge are pointing upward. Of particular interest for its application to information-storage technology, specifically the racetrack memory [3], is the effect of current on the magnetic texture since a relatively small current density is able to induce skyrmion motion, thus fueling hopes that ultra-low current densities might be feasible in the manipulation of magnetic structures [4,5]. But this hope is not without fears since the mechanisms responsible for pinning and current-induced skyrmion motion are presently not well understood [6]. Moreover, experimentally, only very slow translation motion of skyrmions has been observed.
The standard tool of magnetization dynamics is the Landau-Lifshitz-Gilbert (LLG) equation

$$\frac{\partial \mathbf{M}}{\partial t} + (\mathbf{v}_s\cdot\nabla)\mathbf{M} = -\gamma\,\mathbf{M}\times\mathbf{H}_{\mathrm{eff}} + \frac{\alpha}{M}\,\mathbf{M}\times\frac{\partial \mathbf{M}}{\partial t} + \frac{\beta}{M}\,\mathbf{M}\times(\mathbf{v}_s\cdot\nabla)\mathbf{M},\qquad(1)$$

where $\gamma$ is the gyromagnetic ratio, $\mathbf{v}_s$ the spin velocity parallel to the spin current, $\mathbf{H}_{\mathrm{eff}}=-\delta F/\delta\mathbf{M}$ the effective field derived from the free energy density $F$, $\alpha$ the Gilbert damping constant, and $\beta$ the coupling between the spin-polarized current and the local magnetization due to nonadiabatic effects [7,8]. An immediate consequence of equation (1) is that the magnitude of the magnetization, $M^2$, is conserved in time. The LLG equation has found application in a variety of systems such as ferromagnets, vortex filaments, and moving space curves, and in structures such as spin waves and solitons, to name a few [9]. As a time-dependent nonlinear equation, the LLG equation is arguably among the most difficult equations to solve in theoretical physics. It is natural then to seek simplified versions of it. Since our interest here is in current-induced forces in a skyrmion lattice, we consider Thiele's simplification of it and treat the pinning mechanism phenomenologically [10]. Thiele projected the LLG equation onto the relevant translational modes [11] and in this way obtained an equation that can be regarded as a dynamical force equation, though derived from a torque equation. To obtain it, we first assume a steady-state rigid texture, $\mathbf{M}=\mathbf{M}(\mathbf{r}-\mathbf{v}_d t)$, with $\mathbf{v}_d$ standing for the skyrmion drift velocity. Then we cross-multiply equation (1) by $\mathbf{M}$, follow with a scalar multiplication of the result by $\frac{1}{M^2}\,\partial_i\mathbf{M}$, and finally integrate over the skyrmion area:

$$\mathbf{G}\times(\mathbf{v}_s-\mathbf{v}_d)+\mathcal{D}(\beta\mathbf{v}_s-\alpha\mathbf{v}_d)=\nabla V,\qquad(2)$$

where $\mathbf{G}$ is the dimensionless gyro-coupling vector and $\mathcal{D}$ the dissipative dyadic. The gyro-term can be traced back to the Berry phase; it pushes a moving skyrmion perpendicularly to its direction of motion and is also referred to as the Magnus force [5,11,12]. The Magnus force is the counterpart of the Coriolis force in dynamics. The latter is a small correction to the dynamical equations [13], whereas the former, as we will see, dominates the dynamics of our skyrmion system. The Magnus force makes a spinning ball swerve one way as it passes through the air; the Coriolis force is a fictitious force due to motion in a moving, noninertial frame. If we view equation (1) as an equation in the reference frame of the current, it seems more fitting to compare the first term with the Coriolis force. The dissipation term, which sums up the skyrmion's tendency toward a region of lower energy, originates from Gilbert damping. On the right-hand side we have inserted a term due to a potential V, which models the pinning potential. Internal details of the skyrmion are ignored; since the skyrmion is assumed to be perfectly rigid, it is not possible to deduce the pinning forces, owing to the cancellation of forces for such a structure. Pinning is important not only in magnetics but also in superconductivity [14], soliton theory [15] and meteorology. For a skyrmion of winding number $Q=-1$ [1], $\mathbf{G}=4\pi Q\,\hat{\mathbf{n}}=g\,\hat{\mathbf{n}}$ with $g=-4\pi$, $\hat{\mathbf{n}}$ being the normal to the thin film, and the components of the dissipative dyadic take the value $\mathcal{D}\approx 5.577$. It is obvious that the Thiele equations are already a vast simplification over the original LLG equation. Nevertheless they are still nonlinear, albeit involving only first-order derivatives. Unlike Newton's equations of motion, Thiele's equations are not time-reversal invariant. Moreover, the quantities $g$ and $\mathcal{D}_{ij}$ are of topological origin, in contrast with the dynamical parameters entering Newton's equations. It is important, then, to gain familiarity with the Thiele equations if we are to understand current-induced motion in chiral magnets [16].
What is notable about this system is the topological character of the Magnus and dissipative parameters, a significant departure from mechanical systems. In the early 90s ideas about a topological quantum mechanics were in vogue [17]; we might now speak of a topological dynamics for the present system. In this paper, we present two models for which exact solutions of the Thiele equations can be derived. We find the exact solutions to be in excellent agreement with numerical results. Our findings allow us to identify key features of the dynamics. Insights from Newtonian mechanics do not necessarily translate into analogous situations for the Thiele case (for instance Kepler's third law does not hold in one model; in the other model Coriolis deflection occurs without forward motion).
The current-induced dynamics of skyrmions is a subject of much relevance at this time. Some recent works bearing on the topic of particle-like motion of skyrmions moving over pinning sites may overlap with our work and we briefly enumerate some of them. The pinning mechanism was studied by Liu and Li [18] while pinning and creep in chiral magnets was elucidated by Lin et al [19]. The relation between skyrmion and superconducting vortices was explored by Reichhardt et al [20] while Lin et al [21] treated the effects of a current pulse in the creation of a skyrmion. Reichhardt et al explored various aspects of skyrmion dynamics: effects of random quenched disorder [22], ac driven skyrmions over asymmetric quasi-one-dimensional substrates [23] and two-dimensional periodic substrates [24]. The effects of a small hole in a magnetic layer were explained by Muller and Rosch [25]. The creation of skyrmions with polarized current was investigated by Lin et al [26].
One-dimensional potential
We begin with a one-dimensional sinusoidal form for V: $V = -V_0\cos(2\pi x/\lambda)$, and assume $\lambda$ much larger than the skyrmion size [27]. We also assume a constant spin current $v_s$ in the $x$-direction, so $v_{sy}=0$.
Since the solutions are translationally invariant, we take, for simplicity, the initial position to be the origin.
Making use of the standard half-angle (tangent) substitution, the equation of motion can be integrated in closed form, yielding equation (4). When $r<1$ we must make the replacements $\sin\to i\sinh$, $\cos\to\cosh$ and $\tan\to i\tanh$. Since the argument of arctanh must lie between $-1$ and $+1$, $x$ has a limit point when $r<1$. This limiting point does not appear when $r>1$.
There is another way to look at the case $r=1$. Figure 1 shows the motion for a spin velocity in the $x$-direction (this corresponds to $r<1$) and $V_0=10$, $\alpha=0.1$, $\beta=\alpha/2$. We use these latter parameters for figures 1-3. For these parameters the $r=1$ case corresponds to $v_s=0.69037$. The starting point is always the origin. Equation (4) and numerical integration yield the same graphs. The first term of equation (2), the Magnus term, shows that the motion along the $y$-direction is due to the gyro-term and is large, as a comparison of the X and Y displacements in figure 1(b) indicates. Figure 1(b) shows that the motion along the $x$-direction approaches a fixed, or pinning, point as the $x$-velocity approaches zero asymptotically, whereas there continues to be a drift upward. We can think of the first term on the right-hand side of equation (3a) as the force component of the Magnus force, opposed by the second term on the right-hand side, a dissipative term proportional to $\alpha$. At the pinning point these forces balance each other exactly. In equation (3b) the first term is the dissipative force, whereas the second term is the force component of the Magnus force in the $y$-direction; close to the pinning point, the latter is much larger than the former. Figure 2(a), for spin velocity $v_s=0.7$ to the right, corresponds to $r>1$. There is no pinning in this case. The exact and numerical solutions agree as before. The deflection upward by the Magnus force is large, as figure 2(b) shows. It is due to the term linear in time in the equation for Y(t) in equation (4); the first term in that expression is responsible for the small periodic downward dips in the right-hand plot for Y(t). Each of the almost horizontal steps in figure 2(a) corresponds to a crossing into the potential barrier (but there is no tunneling here).
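As a cross-check on the closed-form solution (4), the Thiele equation with the sinusoidal pinning potential can also be integrated numerically. The sketch below is an illustration, not the authors' code: it assumes the sign conventions of the reconstruction of equation (2) given above, and its normalization ($\lambda=1$ and the listed parameter values) is arbitrary, so the numerical threshold it computes need not reproduce the $v_s\approx0.690$ quoted in the text. Below the computed threshold the $x$-coordinate saturates at a fixed point while $y$ keeps drifting; above it, $x$ grows without bound. The same right-hand side can be reused for the two-dimensional potentials of the following sections by swapping the gradient of V.

```python
# Numerical integration of the (reconstructed) Thiele equation
#   g ẑ × (v_s − v_d) + D(β v_s − α v_d) = ∇V,   V(x) = −V0 cos(2πx/λ).
# Parameter values and the normalization λ = 1 are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

g, D, alpha, beta = -4 * np.pi, 5.577, 0.1, 0.05
V0, lam = 10.0, 1.0

def grad_V(x, y):
    # Gradient of the sinusoidal pinning potential (independent of y).
    return np.array([(2 * np.pi * V0 / lam) * np.sin(2 * np.pi * x / lam), 0.0])

def drift(t, r, vs):
    # Solve the linear 2x2 system of the Thiele equation for v_d = (dx/dt, dy/dt).
    A = np.array([[-D * alpha, g],
                  [-g, -D * alpha]])
    b = grad_V(*r) + np.array([g * vs[1], -g * vs[0]]) - D * beta * np.asarray(vs)
    return np.linalg.solve(A, b)

# Pinning threshold: the x-velocity can only vanish where the maximal pinning
# force balances the current-induced drive.
v_thr = (2 * np.pi * V0 / lam) / (g**2 / (D * alpha) + D * beta)

for factor in (0.9, 1.1):                      # one run below, one above threshold
    vs = np.array([factor * v_thr, 0.0])
    sol = solve_ivp(drift, (0.0, 500.0), [0.0, 0.0], args=(vs,), max_step=0.05)
    print(f"v_s = {vs[0]:.3f}: final x = {sol.y[0, -1]:.2f}, final y = {sol.y[1, -1]:.2f}")
```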
Two-dimensional attractive potential
Consider next an attractive two-dimensional potential (equation (5)) with cusps along the $x$- and $y$-axes. As above, we assume a spin current only in the $x$-direction.
Equation (6) gives the corresponding Thiele equations, with sgn functions arising from the cusps. Let $X=x/\lambda$ and $Y=y/\lambda$, measure time on the same scale as $\lambda$, and define the reduced parameters accordingly. Integrating, we obtain relations (7), which hold within a given quadrant where the signs of X and Y stay constant during the integration. Again, from the first of equations (7), adding the two relations and staying in a fixed quadrant, we obtain the exponential relation (8). Equations (8) and (9) are the parametric equations of the trajectory for a given quadrant. They are valid for any quadrant and for any choice of initial velocity in the $x$-direction and initial position. For small times the trajectory is clearly straight. When the dissipative ratio, which is proportional to $\alpha$, is small we can also expect straight trajectories; in regions where it is large, we might expect curved trajectories.
Far from the origin the trajectory is straight. To see this, multiply the first of equations (7) by Y and the second by X and subtract; far away from the origin the slope of the trajectory tends to a constant. The trajectory obtained from equations (8) and (9) compares well with the Mathematica graph. Although the spin velocity $v_s$ is to the right, the direction of motion in the beginning (i.e. third) and following (second) quadrants is dictated by the gradient factor in equation (8).
Note that this gradient is a ratio of quantities (of topological origin in the case of $g$). After the motion has entered the first quadrant, time has become large (∼10^5, see figure 3(b)) and the trajectory veers off, only to assume a straight path outward to infinity. The effect of the cusps is evident and occurs only at the coordinate axes; these cusps might be suitable for modeling line defects. For comparison we show in figure 3(a) the trajectory when the Magnus force is set to zero (dotted-dashed curve): the motion is then caused entirely by the current flow, dissipation and the attractive force. The role of the Magnus force is clearly dominant and defines the trajectory around the origin; in other words, it is responsible for the circling of the potential center. Figure 3(c) shows the trajectory from the starting point S: (−6, −1) for $U_0=20$ and $v_s=2$. As in figure 3(a), the exact and numerical results agree well with each other. The cusps are again evident. What is striking here is the turn-around trajectory about the center of the potential at the origin. In fact a close-up of the trajectory around the origin, shown in the inset of figure 3(d), indicates that the motion is very much like the first two parts of figure 3(a): straight-line segments whose gradients are given by the ratio involving $g$, which is of topological origin. Because the turn-around occurs much closer to the source of the potential than in figure 3(a), we see a faster reversal of the motion. At the point T in figure 3(c) the drift is purely horizontal, i.e. the $y$-velocity vanishes. From the second of equations (7) we infer that this is where the dissipative force component (first term) is balanced by the Magnus term (second term). Figure 4 shows other scenarios, which indicate that whenever particles approach the origin they are bound to undergo the phenomenon already seen in figure 3: straight-line trajectories whose gradients are given by a ratio of $g$ and the dissipative coefficient, $g$ being of topological origin. The dashed curve in figure 4(a) represents motion with the Magnus force set to zero: at S it is almost perpendicular to the actual trajectory, clearly displaying the distinctive character of the Magnus force. The red dotted-dashed trajectory with starting point at S′(1, −1.8) is shown to demonstrate that the position of the starting point (whether to the left or right of the origin) does not hold special significance for the dynamics. A trajectory that goes around the origin is seen in this case, but if S′ were further from the origin the particle would only move to the right without circling the origin. In many cases we chose a starting point to the left of the origin simply to take advantage of the fact that this forces the particle to come into close proximity of the potential center.
The direction of the initial spin velocity plays no special role. For a vertical spin velocity $v_{sy}$ instead of a horizontal one, the same formulas (8) and (9) apply with the roles of the $x$- and $y$-coordinates interchanged. Figure 4(c) shows the trajectory corresponding to figure 4(b) but with $v_{sy}=0.01$ and zero horizontal spin velocity; the trajectory is just that of figure 4(b) rotated by 90°.
Two-dimensional repulsive potential
The formulas (8) and (9) apply to the repulsive case as well; we only have to reverse the sign of $U_0$. As examples we take figures 4(b) and (c), change the sign of $U_0$, and display the results in figure 5. These graphs show the effect of the repulsive force: in the region where the potential is effective, i.e. near the origin, there is a tendency to avoid the origin, as figure 5 shows, whereas we see a certain tendency toward the origin in figure 4. The dashed trajectories in figure 5 correspond to motion when the Magnus force is set to zero. In both cases we see pinning, i.e. motion comes to a stop at a certain point (the origin in figure 5(a) and the $y$-axis in figure 5(b)). We do not see this pinning phenomenon in the attractive case. However, it should be noted that a situation of zero Magnus force does not occur for physical skyrmions.
Pinning
The Thiele equation for the attractive potential (5) does not have a pinning point even though the origin is clearly a local minimum; this is because of the cusps. To see pinning we take two identical attractive potentials $V(x,y)$, one centered at the origin as in equation (5) and another at (3.5, 0). We choose $U_0=2$, $v_{sx}=0.03$ and consider the region 3.5 > x > 0, y < 0, where the total potential is perfectly smooth. The Thiele equations then take the form of equation (10). At a pinning point $(X_0, Y_0)$ both velocity components vanish: $X_0'=Y_0'=0$. These equations can be solved algebraically, and the left-hand sides of equation (10) can be expanded to first order around the pinning point. The resulting first-order equations are easily solved exactly; the solutions have a time dependence of the form $e^{-bt}$ and describe a spiral whose frequency is set by the potential at the pinning point and whose amplitude decays in time through the factor $e^{-bt}$.
The first result is a clear departure from Kepler's third law ($T^2\propto r^3$). From the definition of $b$ we infer that the decay depends only on the Gilbert constant $\alpha$, not on $\beta$; moreover, the frequency depends only on the strength of V at the pinning point. The result is shown in figure 6 for $U_0=2$, $v_{sx}=0.03$. The left-hand graph is obtained from equation (10); the right-hand graph, with starting point at (0.1, 0), is obtained by numerical integration. Equations (10) are applicable only in the smooth region 3.5 > x > 0, y < 0. Unfortunately, applying the above procedure to a similar pair of repulsive potentials, or to a pair of potentials of opposite strengths, did not yield a pinning point: in these cases a point of zero velocity might exist, but it was not a stable point. Moreover, since we have limited ourselves to soluble models, we have not looked into multiple attractive potentials or similar systems.
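The linear-stability argument can be illustrated numerically: locate a zero of the Thiele drift-velocity field, compute the Jacobian there, and read the spiral's decay rate and frequency off its eigenvalues $-b\pm i\omega$. Because equation (5) is not reproduced above, the sketch below substitutes a pair of smooth Gaussian wells for the two attractive potentials; the potential, the starting guess and all parameter values are placeholders chosen only to demonstrate the procedure, not to reproduce the paper's numbers.

```python
# Linear stability at a pinning point of the Thiele drift-velocity field.
# The two smooth attractive wells stand in for the pair of potentials discussed
# in the text; the numbers are purely illustrative placeholders.
import numpy as np
from scipy.optimize import fsolve

g, D, alpha, beta = -4 * np.pi, 5.577, 0.1, 0.05
U0, vs = 2.0, np.array([0.03, 0.0])
centers = [np.array([0.0, 0.0]), np.array([3.5, 0.0])]

def grad_V(r):
    # Gradient of V(r) = -U0 * sum_i exp(-|r - c_i|^2)  (placeholder potential).
    gv = np.zeros(2)
    for c in centers:
        d = r - c
        gv += 2 * U0 * d * np.exp(-d @ d)
    return gv

def drift(r):
    # Solve g ẑ×(v_s − v_d) + D(β v_s − α v_d) = ∇V for v_d.
    A = np.array([[-D * alpha, g], [-g, -D * alpha]])
    b = grad_V(r) + np.array([g * vs[1], -g * vs[0]]) - D * beta * vs
    return np.linalg.solve(A, b)

# 1) Locate a pinning point (v_d = 0); convergence should be checked in practice.
r0 = fsolve(drift, x0=[1.0, -0.5])

# 2) Linearize: numerical Jacobian of the drift field at the pinning point.
eps, J = 1e-6, np.zeros((2, 2))
for j in range(2):
    dr = np.zeros(2); dr[j] = eps
    J[:, j] = (drift(r0 + dr) - drift(r0 - dr)) / (2 * eps)

# 3) Eigenvalues −b ± iω give the spiral's decay rate and frequency.
lam = np.linalg.eigvals(J)
print("pinning point:", r0)
print("decay rate b =", -lam.real[0], " frequency w =", abs(lam.imag[0]))
```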
Conclusions
In summary, we studied the Thiele equation for current-induced motion in a skyrmion lattice through two soluble models of the pinning potential. Thiele's equation is composed of a Magnus force, responsible for transverse motion relative to the current velocity, a dissipative force along the current velocity, and the pinning force; the first is of topological origin whereas the third is imposed externally. In the first, one-dimensional model the Magnus force was found to dominate the dynamics, and even transverse motion without corresponding skyrmion motion in the spin-current direction was possible. We saw a threshold velocity below which motion in the current direction is not allowed, which can be interpreted in terms of a balance between the Magnus and pinning forces. In the second, two-dimensional case we saw the occurrence of straight trajectories in which the interplay of the Magnus and dissipative forces, and hence the topological character of $g$, is evident. Because of the peculiarities of the model, pinning onto a point was not possible; however, with two attractive potentials separated from each other, a pinning point could be found. The trajectory close to the pinning point is a spiral whose frequency and amplitude decay depend only on the Gilbert constant and the strength of the potential at the pinning point; Kepler's third law does not hold in this system. We did not inquire into the effect of mass, which is a natural point of departure for future work [28]. We also examined cases in which (a) the initial velocity was vertical instead of horizontal, (b) the starting point was to the right of the potential center instead of to the left, and (c) the potential was repulsive.
"Physics"
] |
Machine Learning-Based Aesthetic Music Education Informatics Assessment Method
Purpose. An instrumental mechanism for the identification, theoretical substantiation, and introduction of didactic conditions is a dynamic functional-structural model that performs orientation, managerial, formative, and analytical functions in the optimal organization of students' independent educational activity with the use of information and communication technologies. Method. During the research, a comprehensive target program of research and experimental work was developed, based on the step-by-step implementation of the didactic model, and its effectiveness was demonstrated. Findings. The integration of traditional and electronic learning technologies within the framework of the research and experimental work ensured a systematic, planned, and optimized organization, and enhanced control and diagnostic procedures for students' autonomous skills, owing to the requirements of manufacturability, adaptability, and control laid down in the tried-and-tested models of blended learning. Following the completion of the experimental work, quantitative, qualitative, and statistical analyses revealed positive dynamics in the levels of organization of independent educational activity of students of technological and pedagogical specialties with the use of information and communication technologies across motivational, content, operational, and productive criteria in the experimental group. Implications for Research and Practice. According to the results of the experimental work, statistically significant positive dynamics were recorded in the experimental groups: the share of students with a high level of organization of independent educational activity increased by 10%, the share with a sufficient level increased by 13.3%, and the number of students at critical and insufficient levels decreased by 23.3%.
Introduction
Music is one of the most complicated and difficult topics for people to understand. Music can evoke different emotions and memories, and it is important in many parts of life, such as dancing, parties, and teaching language or math skills. Before this study, however, there were no systematic methods for evaluating music aesthetics education information based on machine learning. This study systematically evaluated such information against four criteria: content validity, comprehensiveness of relevant content coverage, information accuracy, and precision of related data sources. Based on those criteria, evaluation experiments were conducted according to an evaluation plan to systematically assess music aesthetics education information.
As artificial intelligence technology has advanced, music instruction has incorporated a variety of intelligent features, such as suggestions for musical practice, intelligent classification of music levels, intelligent mistake correction for violin playing, and software that interacts with an electronic violin. In the opinion of experts, such software and hardware can significantly increase students' motivation for studying and the effectiveness of their learning.
The term "music appreciation" is typically used to describe the process of developing an individual's ability to appreciate music and have an understanding of different musical styles. Many people recommend that people should start listening to music at a younger age. Generally, children typically start listening to classical music and later develop their interests in other genres such as jazz, country, or metal. This article will evaluate the aesthetic evaluation method for machine learning in order to help people understand their taste in music.
Literature Review
The problem of organizing independent educational activity and its leading organizational forms (independent and research work, as well as various consultations) has a long history and is considered in numerous scientific publications by leading scientists of the past and present [1][2][3].
The organization of independent learning activities is positioned by researchers as a managed, unmanaged (spontaneously organized), and self-organized process, which has its specificity depending on the direction and conditions of study.
Most studies of this systemic educational phenomenon are based on the fact that students' independent learning activity can take place at different levels (reproductive, productive, or creative) and occur during classroom and extracurricular time, but necessarily under the indirect guidance of the teacher [4][5][6].
This circumstance directs the efforts of the researchers to study the features of planning, norm-time costs, the logic of introducing organizational forms and methods of activation of independent learning activities, and development of didactic tools and criteria for evaluating its effectiveness, taking into account the specific training of students of technological and pedagogical specialties [7].
At the same time, a nominal increase in students' independent learning activity without introducing changes to the educational process's structure and content does not contribute to its optimal organization. Researchers see new broad perspectives in the introduction of cutting-edge information and network technologies, computer technology, and information transmission and exchange methods [2,8].
Others have confirmed the utility of using information and communication technologies (ICT) to organize independent educational activities for students of technological and pedagogical specialties [9].
However, the emergence of new computer-centric technologies and electronic communications, and the expansion of their hardware and software base, prompt researchers to find fundamentally different ways of intensifying and optimizing students' independent learning activities.
The analysis of the state of the problem of organizing the independent educational activity of students of technological and pedagogical specialties, as well as the level of application of ICT in the educational process, allows the following fundamental contradictions to be identified: (i) in a higher education institution, the high priority of the holistic formation of socially active, independent, and creatively self-realized specialists versus the actual role and importance of students' independent educational activity [10]; (ii) the implementation of a new structure, the use of cutting-edge ICT tools in the management and self-management of students of technological and pedagogical specialties, and the elimination of stereotypes that stymie innovative processes in the higher education system [11,12]; (iii) the arsenal of didactic findings, forms, methods, and types of students' independent learning activity accumulated in science and practice versus the degree of substantiation and experimental verification of the didactic conditions of its organization with the use of modern ICT [13].

Methods

The implementation of the stated purpose and solution of the set tasks were carried out through the application of the following research methods: theoretical (retrospective, comparative, and system analysis; generalization; classification; extrapolation of theoretical and research data; modeling to determine the essence of the key concepts of the study, substantiate the didactic conditions, and design a model of independent educational activity for students of technological and pedagogical specialties using ICT); empirical (included and systematic observation, self-observation, peer review, self-assessment, conversations, interviews, testing and questioning of teachers and students, fixing, scaling, ranking, posing problematic questions, diagnostic control papers, and a pedagogical experiment for checking the effectiveness of the developed means of organizing students' independent educational activity); and methods of mathematical statistics [14,15].
3.1. Data. The experimental work was carried out in three stages during the 2014-2017 academic years at the Academy named after Jan Długosz in Czestochowa. A total of 210 students of technological and pedagogical specialties and 36 teachers were involved in the experiment. The participants' consent to the pedagogical experiment was obtained. Human rights were not violated during the experiment.
The observational experiment's goal was to investigate and analyze the state of the problem of using ICT in the organization of independent educational activities for students of technological and pedagogical specialties.
During the ascertaining experiment (2014-2017) we tried (i) to discover the goals and content of the educational activities of students of technological and pedagogical specialties through the study and analysis of the leading regulations and curricula; (ii) to identify the attitude of the subjects of educational activity to the introduction of ICT in the system of professional education of students of technological and pedagogical specialties, based on the results of surveys, testing, expert assessment and self-assessment, and analysis of documentation; (iii) to determine the aspects, features, and level of organization of independent educational activities by means of ICT on the basis of questionnaires, testing, diagnostic tests, scaling and ranking, and observations of the organization of students' educational activities; and (iv) to single out difficulties and contradictions as a result of the analysis and generalization of the state of the problem of applying ICT in the organization of students' independent educational activities. First and foremost, we discovered the students' position on the organization of independent learning activities, as well as the methods for doing so, through questionnaires and interviews. For this purpose, the content of two questionnaires was developed, Questionnaire 1 "Attitudes to independent learning activities" and Questionnaire 2 "Features of the organization of independent learning activities of students", together with task-situations (see the appendix). The survey revealed difficulties in the understanding by the majority of students of the essence of independent educational activity in its organizational and content-procedural aspects. In particular, 54.2% of 1-2-year students and 45.7% of 3-4-year students experience constant stress during their educational activities. Among the factors that cause this condition, in the context of our study, we focus on the following: (1) little time is devoted to the proper consolidation of knowledge (47.6% of all respondents); (2) inconsistency of the proposed educational tasks with the level of knowledge of students (76.1% of 1-2-year students and 61.7% of 3-4-year students); (3) unsystematic organization of extracurricular independent work (85.7% of all respondents); (4) too much material for self-study (76.1% of 1-2-year students and 38% of 3-4-year students); (5) uniformity and routine of tasks for extracurricular activities (83.3% of students); and (6) the formal nature of counseling by teachers (47.6%).
Among the respondents, only 21.9% of 1-2-year students and 28.7% of 3-4-year students consider it necessary to have a formed and conscious idea of the essence and content of independent educational activity as a guarantee of their future competitiveness, although 60% and 66.8% of 1-2-year and 3-4-year students, respectively, assume that they already have such an idea. Moreover, for 71.4% of all respondents independent educational activity means learning outside the educational institution, while the rest (28.6%) identify it with independent work.
The study showed that the vast majority of students do not have established internal motives for organizing independent educational activity: 83.4% perform only those tasks that can affect the final grade in the discipline, 11.9% perform only the tasks that interest them, and only 4.7% perform all types of work in full.
Results and Discussions
Our investigation and analysis of the situation revealed that the existing system of independent educational activity of students of technological and pedagogical specialties does not fully meet the tendencies of modernization in higher pedagogical education. The need to resolve the identified contradictions and remove the causes of difficulty made it expedient to study experimentally the effectiveness of the didactic conditions for the use of ICT in organizing students' independent learning activities. According to the results of the analysis of theoretical provisions, as well as the experience of organizing students' independent educational activity, a research and experimental program was created and tested through program-targeted, activity-productive, and evaluative-reflexive stages.
In developing a program for organizing students' independent learning activities using ICT, we proceeded from the theoretical developments that investigate the essence of this process, the results of scientific and experimental studies by teachers and psychologists, an analysis of existing practice, our own experience, and the notion that improving the efficiency of students' independent learning activities and introducing ICT tools are managed processes.
Because the status of students' independent learning activity depends on many factors and conditions, the purpose of the developed program is to systematize and coordinate the actions of its subjects (teachers and students) and to create a learning environment that allows the didactic conditions for the application of ICT in the organization of students' independent learning activities in technological and pedagogical specialties to be implemented successfully. Awareness of the systemic nature of students' independent learning activity led us to define the research and experimental program for its organization with the use of ICT as a comprehensive and targeted one. The program is presented as a set of thematic lines focused on the solution of urgent issues: the organization of students' independent learning activities in technological and pedagogical specialties in the process of classroom independent work, extracurricular independent work, research work, and information training.
In addition, the competency-based, informational, and technological approaches ensured the selection of ICTs and the relevant technological models that can be applied in the planning of self-directed learning activities (Table 1).
The criteria for their selection were the following: (i) general pedagogical: logical and professional orientation; efficiency and reliability; connection of theory with practice; combination of computer-based and "traditional" presentation of educational information; awareness, activity, and independence of students in mastering knowledge; (ii) general psychological: a friendly dialogue interface; quality of screen design (color, contrast, clarity, size, speed of change of information, and so on); consideration of students' age and individual characteristics; presence of means for motivating their independent educational activity; pedagogical and computer support in independent learning; (iii) methodical: regularity, algorithmization, step-by-step structure and consistency in the assimilation of educational information; feedback between lecturer and student; a unified approach to the organization of independent educational activity in any educational environment. The experimental work was organized to test the assumption that organizing independent learning activities for students of technological and pedagogical specialties using ICT will be effective if it is carried out on the basis of a theoretically justified didactic model: a system of interrelated theoretical-methodological, organizational-managerial, content-activity, and control-diagnostic components (blocks), each of which performs a specific function (orientational, managerial, formative, and analytical, respectively) and provides for the creation of certain didactic conditions in the educational process.
The most developed indicators of the motivational criterion among students are "interest in using ICT" and "interest in communication with learning goals." The group of motives related to research work is the least developed, because most students do not consider such activities useful or important for their own prospects.
However, the activities of the comprehensive target program, which involved students of the experimental groups in finding relevant information online, collaborating on specialized websites, blogging, conducting web forums and webinars, carrying out tasks and training projects using ICT tools built on the principle of gamification, and, in general, working in the informational learning environment, made it possible to achieve qualitative changes in the structure of their educational and cognitive motivation (Table 2 and the illustrative diagrams in Figure 1).
The traditional system of organizing the learning activities prevailed in the control groups. ICT tools were used in the experimental group.
Quantitative analysis of Table 2 shows that the development of the motivational aspect of organizing students' independent learning activities with the use of ICT occurred unevenly, but most clearly for the students of the experimental group. Figure 1 shows the dynamics of the coefficient of organization of students' independent learning activity by the motivational criterion.
Such outcomes were made possible by the inclusion in the educational process of didactic conditions such as motivational conditioning and the equality of the subjective positions of teachers and students in the management of students' independent learning activity through computer-oriented means. It should be noted that, as subjects of independent learning activity, students within the framework of the scientific and experimental work had the opportunity to show the highest level of
independence and activity, to make a free choice of ICT tools and types of educational tasks, and to build their own educational trajectory under the indirect guidance of the lecturer.
Qualitative shifts in experimental results are presented in Table 3.
Quantitative analysis of the data in Table 3 shows that no significant changes occurred in the motivational-need sphere of students of technological and pedagogical specialties in the control group according to the results of the experiment. The dynamics in the experimental group, on the other hand, are more pronounced: the number of students with a high-level score increased by 9.9%, the number with a sufficient-level score increased by 11.7%, and the number of students with a critical or insufficient level of organization decreased by 21.6%. The validity and non-coincidence of the obtained results were confirmed by the Pearson χ² test with a probability of error of 1%.
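As an aside for readers who wish to reproduce this kind of comparison, a shift in the distribution of students across levels can be checked with a Pearson χ² test on the level counts, for example with SciPy. The counts below are invented placeholders, not the study's data.

```python
# Pearson chi-square test comparing level distributions before and after the
# experiment. The counts are hypothetical placeholders, not the study's data.
from scipy.stats import chi2_contingency

levels = ["high", "sufficient", "critical", "insufficient"]
before = [12, 30, 40, 23]   # students per level at the start (made up)
after  = [22, 42, 28, 13]   # students per level at the end   (made up)

chi2, p, dof, expected = chi2_contingency([before, after])
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
if p < 0.01:
    print("Difference significant at the 1% level.")
```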
Diagnosis of the levels of organization of independent learning activity of students of technological and pedagogical specialties using ICT by the content criterion involved determining the didactic and methodical knowledge available to students about the essence and content of independent learning activity, its forms and methods, the types of ICT, the laws and principles of their use in the educational process (particularly in teaching the school courses of work training, drawing, and technology), and the ways of organizing and conducting independent learning activities using ICT in different settings.
For this purpose, we developed a diagnostic methodology that consisted of problematic questions to identify students' awareness of the types, forms, methods, and techniques of independent learning activities and their organization with the use of ICT. In the end, a diagnostic technique was developed that included the performance of task-situations by students. The content of the tasks was formulated in the light of the principle of complex differentiation for the three topological groups of students and covered indicators of the formation of students' self-study, information, and organizational skills; the method of their implementation required students to use ICT widely.
The results obtained at each of the stages of experimental work are summarized in Table 4.
The data in Table 4 show that, whereas at the beginning of the experiment the development of the system of skills for organizing independent learning activity with the use of ICT was uneven in students of both groups, the final measurement showed, apart from significant dynamics in levels, the proportional development of all types of studied skills in the experimental group students. This was made possible by organizing independent educational activities for students of technological and pedagogical specialties using ICT in accordance with the provisions of the dynamic system-structural model. Note also the changes in the degree of adaptability of the organization of students' independent educational activities in technological and pedagogical specialties (Table 5).
As can be seen, there has been a significant increase in the use of ICT by students in the experimental groups to solve educational tasks. Such results were made possible by the introduction of ICTs and their technological models into the educational process, which enabled the integration of traditional and electronic tools in combined and blended learning systems. Comparing the results of the experiment's ascertaining and formative stages, we have determined that the didactic conditions, theoretically grounded and implemented in the educational process, are sufficiently effective: they support tendencies toward positive changes in the composition, content, and structure of the self-study activities of students of technological and pedagogical specialties and foster the optimality, efficiency, and controllability of their organization with the use of ICT.
Conclusion
According to the findings of the analysis of theoretical provisions, as well as the experience of organizing students' independent learning activity, a research and experimental program was created and tested through program-targeted, activity-productive, and evaluative-reflexive stages.
According to the results of testing in the educational process for the training of students of technological and pedagogical specialties, the following were introduced: a comprehensive target program for the gradual organization of students' independent educational activity with the use of ICT in the process of classroom and extracurricular independent and research work and informal training; didactic conditions and means of indirect and direct management of students' independent learning activities; and technological models for the application of ICT tools.
Following the completion of the experimental work, quantitative, qualitative, and statistical analyses revealed a tendency for positive changes in the levels of organization of independent educational activity of students of technological and pedagogical specialties, both by individual criteria and in general. Positive and statistically significant changes were achieved in the system of investigated activity by motivational (+7.2 units), content (+6.8 units), operational (+9 units), and productive (+7.9 units) criteria. No significant changes occurred in the control groups.
The results of the study can be used in the practice of organizing the educational process at technological-pedagogical faculties, which was reflected in the testing of the work in institutions of higher pedagogical education.
The study does not address all aspects of the issue of using ICT to organize the educational process. Further development is needed on the content of students' independent learning activities in distance, dual, and e-learning systems; the problem of developing students' educational mobility and educational autonomy; and the creation of a holistic educational environment that provides for the self-actualization and self-realization of the personality and forms the capacity for lifelong self-learning and self-improvement.
Recommendations
According to the concept developed, students can get started with the resource in two ways: (1) by selecting a course from the general list of subjects contained in the appropriate tab; the link redirects the student to a page listing all modules and discipline sections broken down by semester (according to the program); if the same discipline is studied under different programs in the same specialties (areas), the student is prompted to choose his or her specialty (area); (2) by choosing his or her course and semester; the following link redirects the student only to the relevant sections or modules of this semester's discipline; again, if the same discipline is studied under different programs in the same specialties (areas), the student is prompted to choose his or her specialty (area).
When going to the page of a discipline, the student can get acquainted with the educational content presented. The tab of a discipline indicates the purpose of the discipline and a list of sections and titles of topics taught in the given semester, with the number of lectures, laboratory, and practical classes. Each type of lesson (lecture, practical, and laboratory) is numbered, and the title, besides the number, contains the subject of the lesson and is designed as an active link to an electronic resource with the option to view and download content. Each topic is completed with a system of tasks for independent work and didactic tools (hyperlinks to information resources, textbooks, manuals, reference books, guidelines, samples of completed independent work, etc.).
Students are offered test tasks of varying levels of difficulty for each topic, as well as indicative tasks for control papers. Depending on the needs of the discipline, the tab may contain multimedia materials for student viewing (instructional videos, video lectures, video lessons, models of phenomena or processes, presentations), requirements, recommendations, and examples of educational projects, as well as a set of hyperlinks to educational resources or cloud services.
"Computer Science",
"Education"
] |
Reprogramming Glia Into Neurons in the Peripheral Auditory System as a Solution for Sensorineural Hearing Loss: Lessons From the Central Nervous System
Disabling hearing loss affects over 5% of the world’s population and impacts the lives of individuals from all age groups. Within the next three decades, the worldwide incidence of hearing impairment is expected to double. Since a leading cause of hearing loss is the degeneration of primary auditory neurons (PANs), the sensory neurons of the auditory system that receive input from mechanosensory hair cells in the cochlea, it may be possible to restore hearing by regenerating PANs. A direct reprogramming approach can be used to convert the resident spiral ganglion glial cells into induced neurons to restore hearing. This review summarizes recent advances in reprogramming glia in the CNS to suggest future steps for regenerating the peripheral auditory system. In the coming years, direct reprogramming of spiral ganglion glial cells has the potential to become one of the leading biological strategies to treat hearing impairment.
INTRODUCTION
It is estimated that disabling hearing loss affects 360 million people worldwide, which is over 5% of the world's population (Olusanya et al., 2014;World Health Organization, 2015). This makes hearing loss the most prevalent form of sensory impairment (Gaylor et al., 2013;Müller and Barr-Gillespie, 2015). Hearing disability is also widespread across all age groups; 0.3% of newborns, 5% of people by the age of 45 and 50% of people by the age of 70 experience some form of congenital or acquired hearing loss (Kral and O'Donoghue, 2010;Sprinzl and Riechelmann, 2010). Many individuals suffering from impaired hearing also experience a significant decrease in quality of life and are more likely to suffer from depression (Mulrow et al., 1990). Therefore, there is a pressing need to discover new strategies to repair hearing.
The auditory system works by converting sound waves into electrical signals that are transmitted to the brain. The tympanic membrane at the end of the external ear canal conveys vibrations in the air to the small bones, or ossicles, of the middle ear. These vibrations are conducted through the ossicles and passed onto the oval window, which separates the middle and inner ears.
The movement of the oval window causes disturbances in the fluid of the cochlear duct and these fluctuations are detected by mechanosensory hair cells in the organ of Corti, which transform this information into chemical signals received by the dendrites of primary auditory neurons (PANs) that emerge from the spiral ganglion (Figure 1). The hair cells of the organ of Corti form one row of inner hair cells followed by three rows of outer hair cells. Inner hair cells are innervated by Type I PANs, which compose 90-95% of PANs and are large and myelinated, whereas outer hair cells are innervated by Type II PANs, which compose 5-10% of PANs and are small and unmyelinated (Nayagam et al., 2011). Type I afferents are the primary receptors for auditory signaling. Unfortunately, less is known regarding Type II function; however, it appears that strong acoustic stimulation is required for activation (Weisz et al., 2009). These glutamatergic PANs relay an electrical impulse from the cochlea, the sensory organ for hearing, to the auditory centers in the brain through the eighth cranial nerve (Appler and Goodrich, 2011). There are two primary categories of hearing loss based on the location of pathology: conductive and sensorineural. The former includes forms of impairment in conveying sound waves through the outer or middle ear. The latter includes forms of impairment resulting from damage to the components of the cochlea, including hair cells and/or PANs (Liberman, 2017). Sensorineural hearing loss can manifest after viral infection, exposure to otherwise lifesaving ototoxic drugs, noise and/or aging (White et al., 2000;Kral and O'Donoghue, 2010;Olusanya et al., 2014;Ruan et al., 2014;Liberman, 2017). Traditionally it was thought that PANs could only become damaged as a result of hair cell loss, a form of PAN damage known as secondary degeneration (Bohne and Harding, 2000;McFadden et al., 2004;Stankovic et al., 2004;Sugawara et al., 2005). However, it is now understood that PAN loss can occur independently of damage to hair cells, a form of PAN damage known as primary degeneration (Kujawa and Liberman, 2006;Lin et al., 2011;Makary et al., 2011). The primary degeneration of PANs leads to a condition known as auditory neuropathy, where the mechanosensory hair cells of the cochlea remain intact but PANs are lost. Primary degeneration can develop as a consequence of glutamate excitotoxicity (Zheng et al., 1997), noise exposure (Lin et al., 2011;Furman et al., 2013), and/or genetic defects (Angeli et al., 2012). This type of sensorineural damage is one of the leading features of presbycusis, or age-related hearing loss, and is characterized by difficulty hearing in noisy settings (Kujawa and Liberman, 2015). In fact, although presbycusis can present itself through four pathological categories (sensory, neural, metabolic, and mechanical, where metabolic refers to degeneration of the stria vascularis and mechanical refers to hardening of cochlear membranes), neuronal loss is characterized as the best indicator of age-related hearing degeneration (Schuknecht and Gacek, 1993).
Once PANs are lost they will never regenerate, hence regenerative medicine techniques hold enormous potential for the recovery of PANs in the spiral ganglion. This is especially significant considering that modern clinical solutions for hearing impairment rely solely on medical devices such as hearing aids and cochlear implants (Müller and Barr-Gillespie, 2015). These assistive technologies have provided a much-needed boon to the lives of patients; however, they are only suitable for a limited population of hearing-impaired individuals and, even when compatible, do not resemble natural hearing or make music enjoyable, as reported by users (Briggs, 2011). One of the main factors involved in the effectiveness of cochlear implants is the health and number of PANs (Yagi et al., 2000). Hence, to improve the quality of life for individuals suffering from hearing impairment, there need to be new interventions that (1) address the population for which current devices are not appropriate and (2) improve the quality of hearing toward a natural level. Biological strategies currently being investigated to replace and/or protect PANs include stem cell (Nayagam et al., 2013) and growth factor therapies (Müller and Barr-Gillespie, 2015). Another option to consider is the direct reprogramming of resident cells in the spiral ganglion into PANs. To the best of our knowledge, other than the reprogramming of nonsensory epithelial cells into induced neurons (iNs) from our group (Puligilla et al., 2010;Nishimura et al., 2014) and our recent reprogramming of neonatal glial cells (Noda et al., 2018), there have been no other attempts at direct reprogramming in the peripheral auditory system (PAS). This review will herein summarize the historical perspectives and recent advances made in direct reprogramming, within the context of regenerative medicine, to propose this strategy as a novel intervention for the treatment of hearing loss. As a second objective, this review aims to position the PAS as an informative model for the study of regenerative medicine both in vitro and in vivo.
GLIA WITHIN THE INNER EAR SPIRAL GANGLION OFFER AN ADVANTAGEOUS SOURCE FOR DIRECT REPROGRAMMING
It is important to consider the target cell type for direct reprogramming since cells acquire lineage-specific epigenetic markers during development (Ho and Crabtree, 2010;Vierbuchen and Wernig, 2012). These genetic signatures may partially explain why it is apparently more difficult to transdifferentiate distantly related lineages (Vierbuchen et al., 2010). Glial cells were first found to be easily converted into neuron-resembling cells through expression of a small number of transcription factors including Pax6 alone (Heins et al., 2002) or Neurog2 and Ascl1 (Berninger et al., 2007). Subsequently, other combinations of transcription factors, such as Brn2, Ascl1, and Myt1l (Vierbuchen et al., 2010) or even Ascl1 alone (Chanda et al., 2014), were found to be able to convert more distant cell types into neurons. These data indicated that it was possible to coax cells to become a cell type with a very different history using only a few, or even one, transcription factor(s). However, it appeared that iNs produced from fibroblasts take longer to mature than glial-derived iNs, presumably due to additional stages required in converting cells from a more distant lineage (Berninger et al., 2007;Heinrich et al., 2011;Wapinski et al., 2013;Chanda et al., 2014). If there are, in fact, distinct stages involved, at least theoretically, it would be easier for glia to progress through these switches in state since both neurons and glia in the CNS naturally derive from the same population of neural progenitor cells (Bertrand et al., 2002). In fact, mutations in Ascl1 and Neurog2 result in premature development of astrocytic precursors instead of neural precursors, and expression of Ascl1 both simultaneously commits progenitors to a neural fate and inhibits the glial developmental program (Bertrand et al., 2002). Adult pools of neural progenitor cells in the subventricular zone and hippocampal subgranular zone, which express Ascl1, also display glial characteristics, and radial glia are a source of neurons during development (Malatesta et al., 2000;Doetsch, 2003;Kriegstein and Alvarez-Buylla, 2009). In the zebrafish retina, Müller glia act as a population of latent neural stem cells that can be activated after lesion to replace retinal neurons (Raymond et al., 2006). This process is dependent on upregulation of Ascl1 (Ramachandran et al., 2010), indicating that normal processes of development and repair from damage can force glial-like cells to undergo transdifferentiation into neurons. Unfortunately, in the PAS no analog exists; however, multipotent stem cells have been discovered within the inner ear, in the utricle (Li et al., 2003) and in the spiral ganglion (Oshima et al., 2007;Zhang et al., 2011;Diensthuber et al., 2014;Li et al., 2016;McLean et al., 2016). These cells have the potential to form neurites, develop synapses and express neuronal markers in vitro, but it is unclear whether they naturally repopulate the spiral ganglion post-injury (Li et al., 2003, 2016). Nevertheless, given the similar history and location of glia in the spiral ganglion, these cells likely have the highest conversion potential in regenerating auditory neurons to restore hearing. In fact, we have recently published an analysis of the transcriptome upon neuronal induction of spiral ganglion glial cells where we observed a marked upregulation of key neuronal signatures and downregulation of key glial signatures, indicating the high potential of reprogramming glial cells into neurons (Noda et al., 2018).
Glia are the support cells of the nervous system. They comprise at least 50% of the cells in the brain and 80% of the cells in the peripheral nerves (Rowitch and Kriegstein, 2010;Zuchero and Barres, 2015). In the brain, macroglia are derived from the same precursors as neurons. Early in development, neuroepithelial progenitor cells differentiate into radial glia, and these cells are converted first into neurons and then into astrocytes and oligodendrocytes (Malatesta et al., 2000). In the PAS, on the other hand, glia and sensory neurons arise from different embryonic sources, the neural crest and the otic placode, respectively (D'Amico-Martel and Noden, 1983;Sandell et al., 2014). These migratory neural crest cells and neural precursors work in tandem during morphogenesis for the proper development of the cochleovestibular nerve (Sandell et al., 2014). In the spiral ganglion, the two major types of glia are satellite cells, which populate the area surrounding the cell bodies of sensory neurons, and Schwann cells, which migrate toward axonal projections (Figure 1) (Zuchero and Barres, 2015). Glia promote neuronal survival, provide nutrients and metabolic support, remove and recycle neurotransmitters, shape synapses, and form myelin sheaths (reviewed in Zuchero and Barres, 2015). In the CNS, astrocytes are additionally critical in regulating blood flow and in forming the blood-brain barrier. Although the cochlea is similarly protected by a blood-labyrinth barrier, resident glial cells do not appear to be involved (Shi, 2016).
Glia are also critical in the response to neural injury and disease. In the CNS, damage caused by acute injury, infection, ischemia and neurodegeneration results in an intricate balance between inflammation, cell death and debris removal (reviewed in Burda and Sofroniew, 2014). One hallmark feature of CNS insult is the proliferation of astrocytes. This process, known as reactive gliosis, results in the formation of a glial scar that prevents the spread of inflammation and protects viable cells (Faulkner et al., 2004). Unfortunately, recent studies have suggested that some reactive glia may play an emerging role in neurotoxicity (Liddelow et al., 2017;Qian et al., 2017) and old glial scars are also believed to inhibit axonal regeneration both physically and chemically through the release of extracellular matrix products (Kimura-Kuroda et al., 2010). Interestingly, there is alternative evidence to suggest that reactive gliosis may be involved in directing uncommitted cells toward a neurogenic fate (Robel et al., 2011). Alternatively, the increased incidence of transdifferentiation following reactive gliosis might instead be related to the post-injury environment since reprogramming experiments are similarly found to be more successful when induced after injury (Heinrich et al., 2014;Chiche et al., 2016;Mosteiro et al., 2016). The post-injury environment is associated with the release of inflammatory cytokines, which in-turn are responsible for several reactive processes such as triggering glial scarring and activating endogenous neural stem cells (Alvarez-Buylla and Garcia-Verdugo, 2002;Arvidsson et al., 2002;Yamashita et al., 2006;Chen et al., 2017). Hence, reprogramming glia may be useful for eliminating, or at least shrinking, glial scars by converting these cells into neurons.
In the PAS, damage can be caused directly to auditory neurons or indirectly through the loss of hair cells (Kujawa and Liberman, 2015). Reminiscent of reactive gliosis in the CNS, in the immediate period following injury there is marked proliferation of glial cells expressing Sox2 (Lang et al., 2011, 2015). These Sox2-expressing glial cells display characteristics similar to neural progenitor cells, comparable to the neurogenic cells found after injury in the CNS (Lang et al., 2015). Despite the fact that neural stem cell niches are found in the spiral ganglion and cranial nerve VIII, unlike the CNS there is no evidence to suggest that there is recovery of neurons after damage (Oshima et al., 2007;Zhang et al., 2011;Diensthuber et al., 2014;Li et al., 2016). Perhaps these cells in the spiral ganglion play more of a neuroprotective role rather than replacing lost neurons. It is also not clear whether the PAS equivalent of reactive gliosis occurs after primary degeneration of PANs or if it only occurs after injury. Additionally, there is some overlap between the two systems since astrocytes and Schwann cells may migrate across the peripheral and central nervous system transitional zone following damage along the cochlear nerve (Hu et al., 2014). Elsewhere in the PNS, neurons have been found to retain some regenerative capacity, a feature likely related to glial interplay since Schwann cell dysfunction with age is thought to play a role in limiting regeneration (Painter, 2017). In sum, glia play a similar role in both the CNS and PAS. They present similar challenges for attempts to regenerate lost neurons, and could provide significant advantages in that success in one field could lead to translatable results in the other.
IN VITRO NEURONAL REPROGRAMMING AND CELLULAR TRANSPLANTATION
It was previously thought that somatic cells obeyed a strict program resulting in a static terminally differentiated state. However, recently it has become accepted that cells are not locked into a certain state but are amenable to changing conditions (Sieweke, 2015). In fact, transcription factors can remodel cells into other differentiated cell types, a technique known as direct reprogramming. This is a beneficial strategy since it can bypass the lengthy induced pluripotent stem cell (iPSC) stage and also consequently decrease the chance of tumorigenesis due to latent pluripotent cells (Kelaini et al., 2014). Reprogramming has had major success in converting various cell types into others, including pancreatic β cell islets (Zhou et al., 2008), brown adipose tissue (Kajimura et al., 2009), cardiomyocytes (Ieda et al., 2010), and neurons (Vierbuchen et al., 2010;Pang et al., 2011). Most of this work has been performed using a combination of transcription factors, miRNAs and small chemical compounds in vitro (Cao et al., 2016;Masserdotti et al., 2016;Gao et al., 2017). Using these techniques, it has become theoretically possible to generate new tissue and potentially even organs from an individual's own cells with reduced tumorigenic side effects.
In vitro transdifferentiated cells can be used for the autologous transplantation of tissues, organs or cells. The earliest experiments demonstrating the transplantation of iNs to the CNS used transcription factor-based reprogramming to induce dopaminergic neurons from fibroblasts for the treatment of Parkinson's disease. These cells were able to successfully integrate into the nervous circuit (Caiazzo et al., 2011), and even resulted in some functional recovery in an animal model of Parkinson's disease (Kim et al., 2011). In the PAS, the only study to-date involving in vitro differentiated iNs and subsequent transplantation relied on a directed differentiation protocol to convert human iPSCs into glutamatergic neurons to be transplanted into guinea pig inner ears (Ishikawa et al., 2017). On a histological level these cells were incorporated into Rosenthal's canal, but circuit integration and recovery of auditory function were not assessed and the number of cells remaining after 2 weeks was significantly reduced, presumably due to the host system's immune response (Ishikawa et al., 2017). Other cases of transplantation in the spiral ganglion have used ESCs (Coleman et al., 2006) and iPSCs (Nishimura et al., 2009) or neurons extracted from other sources, such as embryonic dorsal root ganglion neurons (Hu et al., 2005). These studies demonstrated that it was possible for ESCs or iPSCs to differentiate into glutamatergic iNs that could form synapses with cochlear hair cells and could survive up to 4 weeks. However, these studies did not test recovery of auditory function or survival after a longer period.
Others have instead generated induced neural stem cells (iNSCs) in vitro for transplantation and differentiation in vivo. In the CNS, researchers have generated iNSCs by overexpressing some of the Yamanaka pluripotency factors and neuron-related transcription factors, such as Brn2, along with exposure to small molecules (Kim et al., 2011;Lujan et al., 2012). Ring et al. (2012) were able to directly generate iNSCs from fibroblasts using Sox2 alone (Ring et al., 2012). These Sox2 derived iNSCs were able to differentiate into various mature neuronal and glial subtypes when transplanted in the mouse brain (Ring et al., 2012). Similarly, Lee et al. (2015) were able to differentiate blood cell derived iNSCs using GSK3 and SMAD inhibitors along with Oct4 overexpression into dopaminergic and nociceptive neurons when transplanted in vivo (Lee et al., 2015). Kim et al. (2014) could generate induced neural crest-like cells that could be differentiated into peripheral neurons and glia by overexpression of Sox10 when paired with canonical Wnt activation (Kim et al., 2014). In the PAS, a handful of studies have indicated that the differentiation of progenitor or stem cells toward auditory neurons is a similarly promising strategy. iNs using this method have been shown to abundantly repopulate the auditory nerve and send extensions toward the sensory epithelium (Corrales et al., 2006;Shi et al., 2007). Hu et al. (2017) thoroughly analyzed neural stem cell derived neurons and discovered they were able to form functional synapses with cochlear nucleus neurons in vitro . Chen et al. (2012), using otic progenitors derived from hESCs, discovered that it was possible to restore some auditory function after transplantation (Chen et al., 2012), and Song et al. (2017) found that acquisition of neuronal properties from otic progenitors could be accelerated upon Neurog1 overexpression (Song et al., 2017). Hackelberg et al. (2017) similarly observed integration of differentiated neurons from human-derived neural progenitor cells when implanted into the guinea pig internal auditory meatus after induced auditory neuropathy (Hackelberg et al., 2017). To improve growth of neurites toward PAN targets, Hackelberg et al. (2017) simultaneously delivered a nanofibrous scaffold. However, these studies implanted cells only shortly after induced auditory neuropathy, hence the differentiation of iNs may be the result of an early post-injury environment. This temporary niche is supplied with growth factors and cytokines not normally present and has the potential to even stimulate ESCs transplanted at the internal auditory meatus portion of the auditory nerve to migrate toward Rosenthal's canal and the scala media (Sekiya et al., 2006). Therefore, it is possible that this environment may have a profound impact on transplantation with vastly different results than a late-injury model of auditory neuropathy since this environment is often inhibitory (Lang et al., 2008). These approaches that generate multipotent precursors (e.g., iNSCs, otic progenitors) are useful because these cells are expandable, they have a reduced potency such that they can only differentiate into a limited number of cell types, and are amenable to the environmental cues in the transplanted setting. However, the proliferative capability of these cells is still of concern.
Despite the usefulness of in vitro reprogrammed cells, there are major limitations to transplantation. Cellular transplantation is an invasive process that can result in death of both cells from the original tissue and the transplanted ones. Therefore, cellular transplantation requires tremendous numbers of cells to maximize the yield of viable cells that integrate into host tissues. Fortunately, Rigamonti et al. (2016) have recently developed a large-scale production method to differentiate iPSCs into mature cortical or motor neurons using a suspension culture system (Rigamonti et al., 2016). These iNs could form integrated neural networks and generate synchronized action potentials within the culture system, thereby addressing the need to create large amounts of iNs. However, a second obstacle for transplantation efforts using in vitro differentiated cells is immunogenicity. A characteristic of iPSCs and ESCs maintained in culture for long periods of time is the development of aberrant surface proteins which are passed onto differentiated cells and trigger the immune system (as reviewed in Tang and Drukker, 2011). This problem is supposedly due to the incomplete conversion of cells in vitro and is not observed with in vivo reprogramming (Tang and Drukker, 2011). Hence the completeness of conversion may be related to extrinsic factors provided to cells within the in vivo cellular niche. This consequence of ectopically transplanted cells is also dependent on cell type, since it does not always result in an immune response (Tapia and Schöler, 2016). In sum, in vitro lineage conversion is advantageous for understanding the molecular features of transdifferentiation; however, several difficult obstacles for the transplantation of in vitro derived cells limits its usefulness as a therapy for humans. A more promising solution that avoids some of these issues is the in vivo reprogramming of spiral ganglion glia into neurons.
IN VIVO REPROGRAMMING OF GLIA AND FUNCTIONAL STUDIES
In vivo reprogramming refers to cellular reprogramming that takes place within a living organism through direct intervention methods such as gene therapy. In vivo reprogramming takes advantage of the microenvironments that already exist in the body and bypasses some of the complications associated with cell grafting. As an added benefit, in vivo reprogramming is perhaps more efficient than in vitro reprogramming since in vivo strategies (Qian et al., 2012;Liu et al., 2015) appear to be more successful in converting cardiomyocytes and neurons than in vitro strategies (Ieda et al., 2010;Vierbuchen et al., 2010;Pang et al., 2011;Heinrich et al., 2015). Attempts at reprogramming glia into neurons in vivo have largely focused on two major strategies: converting glia into neuroblasts and differentiating these cells into iNs, or directly converting glia into iNs (Smith and Zhang, 2015;Smith et al., 2016, 2017) (Figure 2A and Table 1).
Neuroblasts are the expandable precursors to neurons, hence by converting resident glial cells into neuroblasts it is possible to increase the number of cells while simultaneously creating new neurons in vivo. The generation of neuroblasts in vivo can be achieved by the ectopic expression of the Sox2 transcription factor, both in the brain (Niu et al., 2013, 2015) and in the spinal cord (Su et al., 2014;Wang et al., 2016). When animals concurrently overexpress neurotrophic factors such as BDNF and noggin or are orally administered the histone deacetylase inhibitor valproic acid, these neuroblasts are found to differentiate into iNs (Niu et al., 2013, 2015;Su et al., 2014). This method of creating iNs through a multipotent neuroblast intermediate involves guiding glia through distinct cell stages. Transduced cells first become neuroprogenitor cells that express Ascl1. They develop into Doublecortin-expressing neuroblasts and commit to a neuronal fate (Niu et al., 2015). They then mature when supplied with exogenous neurotrophic factors. iNs produced from this step-wise differentiation protocol using Sox2 could also reliably generate action potentials and form synapses with endogenous neurons. This technique, which both increases the number of source cells and creates functional neurons, can be useful for neuronal regeneration approaches. It is unclear if Sox2 could drive the conversion of peripheral glia into neuroblasts since the upregulation of Sox2 is a characteristic response after injury in the PAS for glial proliferation (Lang et al., 2011). Additionally, reactive gliosis typically results in extensive amounts of proliferating glia, hence this strategy may not be necessary to create sufficient numbers of iNs in the inner ear. However, reprogramming glia to neuroblasts remains an option if direct conversion in the spiral ganglion yields uncharacteristically low numbers of iNs or there are too few source cells remaining in the spiral ganglion, as seen in older animals (Keithley et al., 1989).
Many other researchers have used neurogenic transcription factors to reprogram glial cells directly into iNs. The overexpression of Ascl1, Brn2, and Myt1l converted parenchymal astrocytes into neurons in the adult mouse striatum (Torper et al., 2013). Similar to the in vitro studies on neuronal reprogramming, Ascl1 on its own also converted midbrain astrocytes and reactive astrocytes from the subventricular zone (Faiz et al., 2015) into functional neurons. NeuroD1 alone was also found to convert astrocytes into mature neurons (Guo et al., 2014;Brulet et al., 2017). Aside from astrocytes, NG2 glia have been targeted as a potential source cell type. NG2 glia are the precursors to oligodendrocytes and could be converted into iNs by NeuroD1 (Guo et al., 2014), Sox2 (Heinrich et al., 2014) or the combination of Ascl1, Lmx1a, and Nurr1 (Torper et al., 2015). In a cortical injury mouse model, Neurog2 and the addition of growth factors to nonneural cortical cells was sufficient for cells to adopt a neuronal fate (Grande et al., 2013). Interestingly, different areas of the brain appeared to have characteristically different responses. Neurog2-transfected cells in the striatum reliably developed into both glutamatergic and GABAergic iNs, whereas cells in the neocortex only developed into glutamatergic iNs. This difference in reprogramming suggests that local environmental cues can have a considerable effect on the outcome of conversion. Alternatively, this may be the result of a developmental effect. Cells may become regionally primed toward neighboring neural subtypes through the process of development, and this phenotypic preference materializes during reprogramming (Masserdotti et al., 2015;Chouchane et al., 2017). Regardless of the mechanism, unfortunately, both areas regenerated less than 5% of the number of neurons lost from injury. Gascón et al. (2016) hypothesized that the low yields achieved after cellular reprogramming were the result of the required switch in metabolism from glycolytic respiration in glia to oxidative (aerobic) respiration in neurons, with the failure to transition resulting in cellular death. Hence, they combined the expression of Neurog2 with Bcl2, an anti-apoptotic protein, to increase iN yields from astrocytes. Instead of acting through the predicted apoptotic pathway, Bcl2 appeared to aid in reprogramming by reducing lipid peroxidation, a marker of ferroptosis. Mosteiro et al. (2016) on the other hand showed that cells failing to reprogram undergo senescence and secrete cytokines that actually facilitate the reprogramming of other cells (Mosteiro et al., 2016). They found that by using a Bcl2 inhibitor they could selectively kill senescent cells and consequently decrease reprogramming efficiency (Mosteiro et al., 2016). From the results of Gascón et al. (2016) and Mosteiro et al. (2016) it is evident that cells transfected with reprogramming factors can fail to reprogram and instead enter an alternative pathway, whether that may be ferroptosis or senescence. However, it is not clear why some cells are successful at reprogramming whereas others are interrupted along the way. These studies are part of an emerging development in the reprogramming field to understand the molecular roadblocks that prevent conversion in hopes of facilitating reprogramming instead of inundating cells with neurogenic transcription factors.
These examples that newly derived iNs from a variety of sources could form functional connections with endogenous neurons when reprogrammed in vivo are exciting developments, but further studies on the mechanisms preventing reprogramming are critical for developing strategies that can be efficient solutions for neurodegenerative diseases.
In studies of neural regeneration in the CNS, damage paradigms involve the use of transgenic mice or targeted lesions. This is a beneficial approach for diseases tied to a specific phenotype or pathology but not when the genetics are unknown and/or broad. In the PAS this problem is circumvented since it is possible to abolish hearing by specifically targeting the destruction of PANs through the use of the chemical ouabain (Yuan et al., 2014). The amount and delivery of ouabain are particularly important, because at higher concentrations it can also influence hair cells (Fu et al., 2012). Nevertheless, this method of selectively destroying endogenous PANs allows researchers to specifically focus on regeneration of neurons and hearing instead of other symptomatic effects. An additional advantage to reprogramming PAS glia into neurons is the relative homogeneity of PANs compared to the innumerable subtypes of neurons found in the CNS. In addition to simply generating neurons for regenerative medicine, it is also critical to differentiate these cells into the required subtype(s). Fortunately, based on studies in the CNS, most astrocytes induced to convert in vitro have been found to retain regional specification consistent with the location where the glial cells were derived. This leads to the corresponding creation of GABAergic neurons in the cortex (Masserdotti et al., 2015), and both GABAergic and glutamatergic neurons in the midbrain. The neural subtypes formed when spiral ganglion glial cells are converted have yet to be examined; however, based on the work completed in the CNS it is likely that these cells will become glutamatergic neurons, which is consistent with the neuronal subtype of PANs (Reijntjes and Pyott, 2016). In fact, spiral ganglion derived neural stem cells almost exclusively differentiate into spiral ganglion-like glutamatergic cells (Li et al., 2016). Fortunately for the purposes of reprogramming in the inner ear, this strategy should be sufficient to restore hearing since all PANs are glutamatergic neurons. Therefore, the in vivo strategies already succeeding in the brain can be applied to the PAS as-is without the need for refinement of neural subtype.
Additionally, reprogramming in the PAS is advantageous since there are already well-established methods that can be easily implemented to robustly validate the integration of reprogrammed iNs into pre-existing circuits. These types of rigorous functional studies are critical to ensure that iNs are working as intended and rescuing the impaired phenotype rather than simply adding cells. Functional studies of reprogramming in the CNS can be difficult since many neurodegenerative conditions involve widespread damage, such as in Alzheimer's disease, and thus require iNs to form extensive connections with endogenous neurons in far-reaching areas of the brain (Goldman, 2016). This is not to mention the sheer number of iNs that would be required to rescue the phenotype of Alzheimer's disease. Given our current state of technology, it is not clear how to both broadly reprogram glia in the brain and prevent off-target reprogramming elsewhere in the body. On the other hand, diseases like Parkinson's and Huntington's where lost neurons are restricted to a single phenotype and/or location may benefit from the reprogramming techniques currently available. There is evidence of some motor rescue in humans with Parkinson's disease when grafted with fetal dopaminergic tissue (Cicchetti et al., 2009;Barker et al., 2015), but this does not lead to stable recovery and typically results in dyskinesias. The instability of grafted tissue may be related to heterologous transplantation since dopaminergic neurons derived from autologous iPSCs in vitro can stably reinnervate the host brain and rescue some motor function when implanted in non-human primates (Hallett et al., 2015). Recently, two breakthrough studies have shown that striatal astrocytes can be reprogrammed into dopaminergic neurons in vivo. These induced dopaminergic neurons could reliably generate action potentials and rescue some motor behavior in mouse models of Parkinson's disease (di Val Cervo et al., 2017;Yoo et al., 2017). These studies used combinations of familiar transcription factors and/or microRNAs (Table 1). In a unique approach, Yoo et al. (2017) also supplemented gene delivery in the mouse striatum with gold nanoparticles that were affected by an electromagnetic field for 3 weeks (Yoo et al., 2017). Stimulation by an electromagnetic field was thought to increase expression of proteins that influenced the chromatin state, thus robustly activating neuronal genes. Previously only glutamatergic or GABAergic neurons had been created in vivo, hence this elusive feat demonstrated by two labs simultaneously indicates the tremendous innovation happening in the field of regenerative medicine. In comparison to the brain, the spiral ganglion in the PAS is a physically small and restricted niche that is separated from the rest of the body by the blood-labyrinth barrier and is composed of only glutamatergic neurons. Although PANs only form connections at two ends, with the hair cells of the cochlea and the neurons of the cochlear nucleus of the brain, these cells form networks in a precisely organized tonotopic layout (Appler and Goodrich, 2011). Further complexity is added when considering spontaneous discharge rate, activation threshold, and sound intensity coding of PANs, which inform the termination patterns of PANs in the cochlear nucleus (Kawase and Liberman, 1992). However, the function of the PAS can be relatively easily evaluated using objective audiometric tests.
The auditory brainstem response (ABR) is a non-invasive recording of electrical activity transmitted between cranial nerve VIII and the brainstem (Davies, 2016). It is logged using electrodes placed on the surface of the scalp (Guo et al., 2014). ABR waveforms have a distinctive five-wave pattern that can be used to identify the location of pathology, and therefore can be used to test integration of iNs into the auditory circuit. In the case that an auditory evoked potential cannot be detected by the ABR, it is possible to use electrically evoked compound action potentials to test the electrical activity of the auditory nerve independent of auditory activity (Ramekers et al., 2015). Although this is an invasive strategy that requires implanting electrodes in the cochlea and brain, it can be useful to test whether iNs are electrically active but lack functional connectivity between the cochlea and the cochlear nucleus. In either case, reprogramming in the PAS can be robustly tested using powerful audiometric techniques. These features make the PAS an attractive opportunity to examine reprogramming techniques on a smaller scale with equally landmark implications as studies in the CNS.
CHALLENGES AND FUTURE DIRECTIONS
Building upon these foundational studies on direct neuronal reprogramming of glia in the CNS, the direct reprogramming of spiral ganglion glial cells into PANs in the PAS appears likely to be a feasible strategy to restore hearing. Ample in vitro and in vivo evidence indicate that glia are amenable to conversion into functional neurons. In terms of delivery to the spiral ganglion, proneurogenic genes can be administered using adeno-associated viruses since they have low toxicity and immunogenicity while being safe for human usage (Mueller and Flotte, 2008). However, for strategies such as this to be useful, the success of conversion in aged mice will need to be tested since the most likely recipient for regenerative medicine efforts will be adults. Ahlenius et al. (2016) have shown that cells acquired from older mice display senescence, overexpress the transcription factor Foxo3 and are more difficult to reprogram in vitro (Ahlenius et al., 2016). Directly reprogrammed neurons additionally retain age-related signatures, which may include nucleocytoplasmic defects that can critically alter the cellular phenotype in vitro, although this has not yet been observed in vivo (Mertens et al., 2015). On the other hand, Mosteiro et al. (2016) have shown that senescent cells secrete cytokines which increase the reprogramming efficiency of nearby cells when transduced with transcription factors in vivo (Mosteiro et al., 2016). Therefore, more research is needed on in vivo reprogramming in adult cells to elucidate the effectiveness of conversion on aged cells.
In the case that in vivo reprogramming can create suitable numbers of iNs, there is still the issue of forming functional synaptic connections with the mechanosensory hair cells of the cochlea and the brainstem. Given the nature of neural connections between the cochlea and the brainstem, it is likely that there will be equal or even greater success at reprogramming glia into neurons and circuit integration in the PAS than in the CNS. This is because PANs are glutamatergic and have a single connection to the cochlea and another to the auditory center of the brain. A more difficult task in the PAS will be to establish tonotopic connections, which will be critical in restoring natural-like hearing (Figure 2B). It might be necessary to combine direct reprogramming with the delivery of neurotrophic factors through an osmotic pump (Sly et al., 2012) or a cell-based therapy (Zanin et al., 2014) to induce axon pathfinding and synapse formation. Hence, more work will need to be done to see whether reliable neural connections are formed. If successful, the PAS has the potential to become a model system to test regenerative medicine approaches for many neurodegenerative diseases that would benefit from a gene therapy approach to cell regeneration.
AUTHOR CONTRIBUTIONS
SM, C-LZ, and AD: conceptualization and writing. AD: supervision and funding.
"Biology",
"Medicine"
] |
On the computational aspects of Charlier polynomials
Abstract Charlier polynomials (CHPs) and their moments are commonly used in image processing due to their salient performance in the analysis of signals and their capability in signal representation. The major issue of CHPs is the numerical instability of coefficients for high-order polynomials. In this study, a new recurrence algorithm is proposed to generate CHPs for high-order polynomials. First, sufficient initial values are obtained mathematically. Second, the reduced form of the recurrence algorithm is determined. Finally, a new symmetry relation for CHPs is realized to reduce the number of recurrence times. The symmetry relation is applied to calculate 50% of the polynomial coefficients. The performance of the proposed recurrence algorithm is evaluated in terms of computational cost and reconstruction error. The evaluation involves a comparison with existing recurrence algorithms. Moreover, the maximum size that can be generated using the proposed recurrence algorithm is investigated and compared with those of existing recurrence algorithms. Comparison results indicate that the proposed algorithm exhibits better performance because it can generate a polynomial 44 times faster than existing recurrence algorithms. In addition, the improvement of the proposed algorithm over the traditional recurrence algorithms in terms of maximum-generated size is between 19.25 and 42.85.
Alaa M. Abdul-Hadi, Sadiq H. Abdulhussain*, and Basheera M. Mahmmod
PUBLIC INTEREST STATEMENT
Discrete orthogonal polynomials are utilized in different fields such as image processing, speech enhancement, steganography, and others. There are many types of discrete orthogonal polynomials; discrete Charlier polynomials are well known and highly utilized in different applications. The existing algorithms have limitations in size when generating discrete Charlier polynomials. In this article, we present a new algorithm that is able to generate discrete Charlier polynomials with high order and large size. Thus, large signal and image sizes can be analyzed.
Introduction
Discrete orthogonal polynomials have attracted considerable attention from researchers in numerous scientific fields, particularly speech and image analyses (S. H. Pee et al., 2017), because they are robust to noise, do not exhibit redundancy, and do not require a priori information (S. H. Abdulhussain, Ramli, Hussain et al., 2019). Consequently, these polynomials have been extensively used in different applications, such as information hiding (Radeaf et al., 2019), face recognition (Akhmedova & Liao, 2019), edge detection (S. H. Abdulhussain, Ramli, Mahmmod et al., 2017), video content analysis (S. H. Abdulhussain, Rahman Ramli et al., 2019), and speech enhancement. In addition, basis functions of orthogonal polynomials can be used as an approximate solution for differential equations (Mizel, 2008). Hu (Ming-Kuei, 1962) introduced the first type of moments, namely, geometric moments. This type of moment is used as a descriptor for images and is considered invariant to image manipulations, such as translation, rotation, and scaling. However, geometric moments are non-orthogonal and exhibit information redundancy, i.e., a signal cannot be reconstructed from these moments.
To address this issue, the concept of continuous orthogonal moments was introduced by Teague (Teague, 1980). For example, Fourier-Mellin (Sheng & Shen, 1994), Legendre (Chong et al., 2004), and Zernike (Khotanzad & Hong, 1990) moments are continuous orthogonal polynomials. Such polynomials improve signal representation by using moments and reduce the effect of noise. However, continuous orthogonal moments involve coordinate transformation and approximation, resulting in computational load and numerical errors. Accordingly, discrete orthogonal polynomials were introduced in Mukundan (Mukundan et al., 2001) to address the issues of continuous orthogonal polynomials. Hahn (Yap et al., 2007), Krawtchouk (Yap et al., 2003), Tchebichef (Mukundan et al., 2001), and Charlier (Zhu et al., 2010) polynomials are examples of discrete orthogonal polynomials. To compute discrete orthogonal moments, discrete orthogonal polynomial coefficients (DOPCs) should be computed accurately.
The computation of DOPCs is computationally expensive and may lead to numerical errors due to the presence of a hypergeometric series and gamma functions. Thus, three-term recurrence (TTR) algorithms are utilized to compute DOPCs. The two types of recurrence relations are generally the TTRs in the n- and x-directions. However, different studies have shown that DOPC values suffer from numerical propagation error when the recurrences are used to compute DOPCs for high orders. Thus, new recurrence algorithms have been presented to address the problem of numerical propagation errors for Tchebichef and Krawtchouk polynomials.
Studies performed on Charlier polynomials (CHPs) are divided into 1) recurrence relation algorithms and 2) moment computation algorithms. Research on recurrence algorithms has introduced both n- and x-direction recurrence algorithms. Both types of recurrence algorithms cannot generate high-order polynomials because of the initial values used and the number of recurrence times performed. The second stream of studies involves the computation of Charlier moments (CHMs) with reduced computational cost. However, these studies are based on either the n- or x-direction recurrence algorithm. To the best of our knowledge, no previous study has discussed the ability of the CHP to be generated at high orders and large sizes. In addition, the main concerns of researchers are increasing the speed and minimizing the error (Razian & MahvashMohammadi, 2017). Motivated by the aforementioned issues and discussion, the current study aims to present a new recurrence relation to compute CHPs efficiently for high-order polynomials.
In this paper, an empirical study is performed to investigate the maximum order that can be generated using the existing algorithms and to analyze their problems. Due to the importance of the initial values and their effect on the computation of the polynomial coefficients, a sufficient set of initial values is proposed. In addition, a reduced form of the recurrence algorithm is presented to reduce the computation time as well as the propagation error. Lastly, we present a new symmetry relation that reduces the numerical propagation error via the reduction of the recurrence times used.
The rest of this paper is structured as follows. The basic aspects of computing CHPs and the existing TTR algorithms are described in Section 2. The proposed recurrence algorithm is presented in Section 3. An experimental analysis is performed to evaluate the proposed recurrence algorithm in Section 4. Lastly, conclusions are drawn in Section 5.
Computation of Charlier polynomials and Charlier moments
CHPs are defined and CHMs are computed in this section. The existing TTR relation is also introduced.
The definition of CHP
The definition of the CHP $C_n^p(x)$ for the $n$th order is obtained from (Zhu et al., 2010) as follows:

$$C_n^p(x) = {}_2F_0\!\left(-n, -x;\, ;\, -\frac{1}{p}\right), \quad n, x = 0, 1, \ldots, N-1,$$ (1)

where $p > 0$ represents the parameter of the CHP, and ${}_2F_0$ is the hypergeometric series, which is given by (Koekoek et al., 2010):

$${}_2F_0(a, b;\, ;\, z) = \sum_{k=0}^{\infty} \frac{(a)_k (b)_k}{k!}\, z^k,$$ (2)

where $(a)_b$ represents the ascending factorial, which is also known as the Pochhammer symbol, and is defined as follows:

$$(a)_b = a(a+1)(a+2)\cdots(a+b-1) = \frac{\Gamma(a+b)}{\Gamma(a)}.$$ (3)

On the basis of Equations (1) and (2), the CHP can be expressed as:

$$C_n^p(x) = \sum_{k=0}^{n} \frac{(-n)_k (-x)_k}{k!}\left(-\frac{1}{p}\right)^{k}.$$ (4)

The CHP fulfills the orthogonality condition

$$\sum_{x=0}^{\infty} \omega_C(x; p)\, C_n^p(x)\, C_m^p(x) = \rho_C(n; p)\, \delta_{nm},$$ (5)

where $\omega_C(x; p)$ and $\rho_C(n; p)$ are the weight function and squared norm of the CHP $C_n^p(x)$, respectively. The weight function is defined as follows (Jahid et al., 2019):

$$\omega_C(x; p) = \frac{e^{-p}\, p^{x}}{x!},$$ (6)

and the squared norm $\rho_C(n; p)$ is expressed as follows (Jahid et al., 2019):

$$\rho_C(n; p) = \frac{n!}{p^{n}}.$$ (7)

The computation of CHP coefficients (CHPCs) by using Equation (4) produces numerical instability. Therefore, a weighted and normalized CHP (WNCHP) is required (S. H. Abdulhussain et al.). The $n$th order of the WNCHP is expressed as follows:

$$\hat{C}_n^p(x) = C_n^p(x)\, \sqrt{\frac{\omega_C(x; p)}{\rho_C(n; p)}}.$$ (8)
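For concreteness, the snippet below is a minimal reference implementation of the direct (non-recursive) evaluation in Equation (4); it is not part of the original paper and assumes the standard hypergeometric definition given above. It is useful mainly as a slow ground truth against which recurrence-based implementations can be checked for small n and x.

```python
import numpy as np
from scipy.special import gammaln

def charlier_direct(n, x, p):
    """Direct evaluation of the Charlier polynomial C^p_n(x) from its
    hypergeometric-series form.  Each term equals
    (-1)^k * n! * x! / (k! * (n-k)! * (x-k)!) * p**(-k),
    evaluated in log-space to delay overflow.  Intended only as a reference
    for small n and x, not as an efficient generator."""
    k = np.arange(min(n, x) + 1)
    log_terms = (gammaln(n + 1) + gammaln(x + 1)
                 - gammaln(k + 1) - gammaln(n - k + 1) - gammaln(x - k + 1)
                 - k * np.log(p))
    return float(np.sum((-1.0) ** k * np.exp(log_terms)))
```

Because the terms alternate in sign and grow factorially, this direct form quickly becomes unstable for large orders, which is exactly the motivation for the recurrence-based computation discussed next.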
Definition of Charlier moments
Moments, also known as transform coefficients, are defined as scalar quantities used to represent signals without redundancy (Abdulhussain et al., 2019a). To represent a 1D signal $f(x)$ in the moment domain of the CHP, the CHMs are computed as follows:

$$\phi_n = \sum_{x=0}^{N-1} \hat{C}_n^p(x)\, f(x), \quad n = 0, 1, \ldots, Ord - 1,$$ (9)

where $\phi_n$ represents the CHMs of the function $f(x)$ and $Ord$ is the maximum order used to represent the signal. The signal $f(x)$ can be reconstructed from the CHMs as follows:

$$f(x) = \sum_{n=0}^{Ord-1} \phi_n\, \hat{C}_n^p(x).$$ (10)

For a 2D signal $f(x, y)$ with a size of $N_1 \times N_2$, the 2D CHMs $\phi_{nm}$ are computed as follows:

$$\phi_{nm} = \sum_{x=0}^{N_1-1} \sum_{y=0}^{N_2-1} \hat{C}_n^p(x)\, \hat{C}_m^p(y)\, f(x, y), \quad n = 0, 1, \ldots, Ord_1 - 1 \ \text{and}\ m = 0, 1, \ldots, Ord_2 - 1,$$ (11)

where $Ord_1$ and $Ord_2$ represent the maximum orders utilized for signal characterization. To reconstruct the 2D signal $f(x, y)$ from the CHMs, the process is performed as follows:

$$\hat{f}(x, y) = \sum_{n=0}^{Ord_1-1} \sum_{m=0}^{Ord_2-1} \phi_{nm}\, \hat{C}_n^p(x)\, \hat{C}_m^p(y).$$ (12)

Notably, the reconstructed signal satisfies $\hat{f} = f$ when all the moments are used in the reconstruction process.
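As an illustration of Equations (11) and (12) in matrix form (not code from the original text): if R is the N x N WNCHP matrix with entries R[n, x] = C_hat^p_n(x), the forward and inverse 2D transforms reduce to two matrix products. The sketch below assumes such a precomputed R, e.g., produced by the recurrence algorithm proposed later in the paper.

```python
import numpy as np

def charlier_moments_2d(image, R):
    """Forward 2D Charlier transform: phi = R @ image @ R.T,
    where R[n, x] holds the WNCHP value C_hat^p_n(x)."""
    return R @ image @ R.T

def charlier_reconstruct_2d(moments, R, order=None):
    """Inverse 2D Charlier transform.  If `order` is given, only moments up to
    that order are kept, mimicking reconstruction from a truncated moment set."""
    M = moments
    if order is not None:
        M = np.zeros_like(moments)
        M[:order, :order] = moments[:order, :order]
    return R.T @ M @ R
```

With all moments retained and an exactly orthonormal R, the reconstruction reproduces the input image up to floating-point error.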
Existing three-term recurrence algorithms
The computation of CHPCs by using Equation (8) is considered computationally expensive because of the utilization of the hypergeometric and gamma functions. Consequently, the TTR algorithm is used to compute CHPCs. Existing TTR algorithms are performed in the n- and x-directions. These TTR algorithms are presented and analyzed in detail in the following subsections.
Recurrence Algorithm in the n-direction
The TTR algorithm of the WNCHP in the n-direction (Zhu et al., 2010) computes the coefficients $\hat{C}_n^p(x)$ for $n = 1, 2, \ldots, N-1$ and $x = 0, 1, \ldots, N-1$ from the coefficients of the two preceding orders, starting from the two sets of initial values $\hat{C}_0^p(x)$ and $\hat{C}_1^p(x)$, $x = 0, 1, \ldots, N-1$ (Equations (15) and (16)). That is, the TTR algorithm in the n-direction computes the WNCHP coefficients of the $n$th order by using the WNCHP values of the $(n-1)$th and $(n-2)$th orders. For the sake of simplicity, the TTR algorithm in the n-direction is denoted by TTRn.
In practice, this TTRn algorithm is inapplicable to large signal sizes and high-order polynomials. Figure 1 shows the WNCHP for different size (N) and parameter (p) values. The coefficient values and distribution of the WNCHP are affected by the value of the CHP parameter (p). When N = 150, the WNCHP is generated without any numerical instability; however, stable WNCHP coefficients cannot be generated when N = 180 because of the numerical instabilities and accumulated errors resulting from the nature of the recursive computation of the polynomial coefficients. In addition, numerical errors occur in the TTR because the initial values are zero (as shown in Figure 1) due to the nature of the formula used. To solve this problem, a TTR algorithm in the x-direction is proposed.
In practice, the TTRx algorithm also cannot handle signals with high orders and large sizes. Figure 2 shows the WNCHPs generated using the TTRx algorithm for different size (N) and parameter (p) values. When N = 150, the WNCHP is completely generated without numerical errors. However, the TTRx algorithm cannot generate stable WNCHP coefficients when N = 180 because of numerical instability (see Figure 2(d)) and the zero values of the computed initials (see Figures 2(e) and 2(f)).
Proposed recurrence relation
The proposed algorithm for computing CHPCs is presented in this section. In the following subsection, initial value analysis, CHP identity, and the proposed TTR algorithm based on the initial value and CHP identity are discussed. Then, a summary of the proposed algorithm is provided for further clarification. For more clarification, the nomenclatures and the abbreviation used in this paper are listed in Tables 1 and 2, respectively.
Initial values
The computation of the initial value is essential and affects the other values of CHPCs.
Both TTR algorithms (i.e., TTRn and TTRx) rely on two sets of computed initial values. For example, $\hat{C}_0^p(x)$ and $\hat{C}_1^p(x)$ are the two initial sets used in TTRn. In practice, computing the initial values by using Equations (15) and (16) for TTRn or Equations (19) and (20) for TTRx is infeasible because of zero and incorrectly computed values (Figures 1(d) and 2(d)). Figure 1(d) shows that the values within the range n = 0 and x = 172, ..., 174 are generated with zero-value coefficients. In addition, the values within the range n = 0 and x = 175, 176, ..., 179 are incorrectly generated due to the nature of the formula used. The incorrectly generated coefficient values are indicated as "Not a Number" in several testing environments, such as MATLAB, C++, and Python. In Figure 2(d), the same problem appears for the values at x = 0 and high orders n. From Equation (21), the factorial swiftly increases as the value of the index x increases. In addition, the term $p^{x}$ increases as x increases. In practice, these terms are computed incorrectly. Note that the same problem of TTRn also takes place for TTRx.
To overcome this problem, one initial value can be computed and a two-term recurrence can be used. This process can be performed by first computing $\hat{C}_0^p(0)$ and then estimating the values of $\hat{C}_n^p(0)$ recursively within the range n = 1, 2, ..., N − 1. In practice, however, the initial value $\hat{C}_0^p(0)$ cannot be computed for the parameter value p = 1480. Thus, the best initial value that can be used to start computing the CHPCs is a coefficient with a recognizable value. To achieve a recognizable initial value, and considering that all the CHPCs rely on $\hat{C}_n^p(0)$, the coefficients $\hat{C}_n^p(0)$ are inspected. Figure 3 shows the values of $\hat{C}_n^p(0)$ for different values of the parameter p when N = 300.
In Figure 3, the peak value of $\hat{C}_n^p(0)$ for different parameter p values is located at n = p − 1 and n = p.
For example, the location of the peak of $\hat{C}_n^p(0)$ in Figure 3 tracks the value of p. Nevertheless, this initial value cannot be estimated directly for large values of the parameter p because of the exponentiation and factorial functions involved. The key solution to this problem is to evaluate the initial value in logarithmic form, where logΓ is the logarithmic gamma function (Hart, 1978). Figure 4 presents the resulting values of $\hat{C}_p^p(0)$. To compute the remaining CHPCs $\hat{C}_n^p(0)$, a two-term recurrence relation can be obtained and used: once for the range n = p − 2, p − 3, ..., 0 and once for the range n = p + 1, p + 2, ..., N − 1. In this way, the two-term recurrence relation yields the CHPCs $\hat{C}_n^p(0)$ for n = 0, 1, ..., N − 1.
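The following snippet illustrates the log-domain evaluation described above. It is an illustrative sketch rather than the paper's exact equation: it assumes the standard weight and squared norm of Equations (6) and (7), under which C_hat^p_n(0) = sqrt(e^{-p} p^n / n!), and it uses the logarithmic gamma function to keep the computation stable for large n and p.

```python
import numpy as np
from scipy.special import gammaln

def wnchp_at_x0(n, p):
    """Stable evaluation of the WNCHP value at x = 0.
    Under the weight e^{-p} p^x / x! and squared norm n! / p^n,
    C_hat^p_n(0) = sqrt(e^{-p} * p**n / n!), whose logarithm is
    0.5 * (-p + n*log(p) - logGamma(n + 1))."""
    n = np.asarray(n, dtype=float)
    log_val = 0.5 * (-p + n * np.log(p) - gammaln(n + 1.0))
    return np.exp(log_val)

# The peak of C_hat^p_n(0) over n sits near n = p, so it is a well-scaled
# starting value for the two-term recurrence along the x = 0 column.
```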
CHP identity and recurrence relation
In this section, we present the WNCHP identity that reduces the numerical error of the CHPCs computed with the TTR. This identity is used to compute the CHPCs for one portion of the polynomial plane and then obtain the CHPCs for the other portion. The identity is symmetric about the primary diagonal (n = x), i.e., $\hat{C}_n^p(x) = \hat{C}_x^p(n)$.
The TTR algorithm
From the presented WNCHP identity and the computed initial values, the CHPCs in the first portion (P1) are computed using the modified TTRx algorithm; the CHPCs in the remaining portion are then obtained directly from the identity.
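As a small illustration of how the identity halves the work (an illustrative sketch, not code from the paper): once the coefficients have been computed for one triangular portion of the n-x plane, the other portion follows by transposition.

```python
import numpy as np

def complete_by_symmetry(R_partial):
    """Complete a WNCHP coefficient matrix from one triangular portion.
    R_partial[n, x] is assumed to be filled only for n <= x (one side of the
    diagonal n = x); the identity C_hat^p_n(x) = C_hat^p_x(n) fills the rest."""
    upper = np.triu(R_partial)              # entries with n <= x
    return upper + np.triu(R_partial, k=1).T
```

Only about half of the recurrence evaluations are then required, which is the main source of the speedup reported in the experimental section.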
Summary of the proposed algorithm
This section summarizes the steps of the proposed TTR algorithm to further understand the computation of WNCHP. The steps are summarized as follows.
Step 2: The values of $\hat{C}_n^p(0)$ are computed.
The aforementioned steps are illustrated in Figure 6.
Experimental analysis
This section presents the performance evaluation of the proposed method in terms of computational cost, energy compaction, signal reconstruction ability, and maximum size of the generated polynomial.
Analysis of computational cost
The performance of the proposed algorithm in terms of the computational cost of obtaining CHPCs should be demonstrated. The computational cost of the proposed algorithm is compared with those of the existing algorithms, namely, TTRn and TTRx. In addition, a recent algorithm (GSOP) is included in the comparison.
The computation time experiment is performed on TTRn, TTRx, and the proposed algorithm using different CHP sizes and the parameter p = N/2. Figure 7 shows the average computation time results for 10 runs. The time required to generate the CHPCs using the TTRn and TTRx algorithms is comparable. For example, the time required to generate the CHPCs using the TTRn and TTRx algorithms at N = 2000 is ~0.619 s and ~0.607 s, respectively. In addition, the time required to generate the CHPCs increases remarkably as the polynomial size increases when using the TTRn and TTRx algorithms because of 1) the number of computed coefficients, i.e., N × N, and 2) the mathematical expression used in the computation of the CHPCs. However, the computation time of the proposed algorithm is much lower than those of the existing algorithms. For example, at N = 2000, the computation time is ~0.014 s, i.e., the execution time improvement percentage (ETIP) is ~97.7%. Notably, the ETIP is computed using the following formula (Hosny, 2008):

$$ETIP = \frac{Time_E - Time_P}{Time_E} \times 100\%,$$

where $Time_P$ and $Time_E$ are the computation times of the proposed and existing TTR algorithms, respectively. To further illustrate the advantage of the proposed algorithm, Table 3 lists the ETIP values of the proposed polynomial compared with those of the existing recurrence algorithms (TTRn and TTRx). The proposed algorithm reduces the time required to generate the CHP by ~97.70%. To further explain the obtained results, it is well known that the expensive operations that influence the computational cost of a recurrence algorithm are multiplications, divisions, and the number of recurrence evaluations. Owing to the employment of the identity relation in the proposed algorithm, only 50% of the coefficients are computed; accordingly, 50% of the expensive operations are excluded. For instance, at N = 100, 10,000 coefficients would have to be computed using the existing TTR algorithms. Each coefficient is computed by performing the TTR relation, and each recurrence relation has parameters that need to be computed. These parameters involve multiplication and division operations, which increase the execution time. On the other hand, for the proposed algorithm, only 5,000 coefficients are computed, i.e., 50% of the recurrence relations are excluded. Thus, the proposed algorithm reduces the number of operations required and is considered effective in terms of computational complexity for a wide range of CHP sizes.
To emphasize the ability of the proposed algorithm in terms of computational cost, the execution time speedup ratio (ETSR) is calculated between the proposed algorithm and TTRn, TTRx, and GSOP. Table 4 lists the ETSR between the proposed algorithm and the existing algorithms. The ETSR is computed as follows:

$$ETSR = \frac{Time_E}{Time_P}.$$

It can be inferred from Table 4 that the proposed algorithm outperforms the existing recurrence algorithms in terms of the speedup ratio. For instance, when N is 500, the average ETSR is 44x over TTRn and TTRx. The obtained speedup ratio is due to the reduction of the recurrence times. In addition, the average speedup ratio of the proposed algorithm over GSOP is 17906x. The GSOP algorithm is considered computationally overloaded because of the utilization of nested loops in its implementation.
Analysis of polynomial order and energy compaction
The distributions of signal moments typically differ for each transform (Abdulhussain et al., 2019a). In addition, the arrangement of the moment sequence indices n is important in retrieving significant signal information. Thus, the moment energy distribution for the discrete Charlier transform (DCHT) should be known prior to the analysis of signal reconstruction. Such an analysis can be performed by following the mathematical procedure presented in (Jain, 1989;Zhu et al., 2010), which is based on a stationary Markov sequence with zero mean and length N. Suppose that the matrix M is the covariance matrix with covariance coefficient r, given as follows (Jain, 1989):

$$M_{ij} = r^{|i-j|}, \quad i, j = 0, 1, \ldots, N-1.$$

After transforming the covariance matrix M into the Charlier transform domain, the diagonal of the transformed matrix S is used to represent the variances $\sigma_l^2$ of the transform coefficients. To obtain the matrix S, the following operation is applied:

$$S = R\, M\, R^{T},$$

where R refers to the WNCHP matrix and $(\cdot)^{T}$ represents the transpose operation of a matrix. Different values of the parameter p and of the covariance coefficient r are used in this experiment. The results of the experiment are listed in Table 5 by using p = N/4, N/2, and 3N/4 and r = 0.85 and 0.95.
The maximum value of the variance $\sigma_l^2$ of the DCHT is located at the index l = 0, and the variance decreases as the index value l increases. Consequently, the DCHT moment order that can be used to reconstruct the signal information follows the natural order n = 0, 1, 2, ..., N − 1.
Energy compaction is considered an important property in checking the performance of a discrete transform. It is used to evaluate the tendency of the CHP to pack a large fraction of the signal energy into relatively few moment coefficients. To check the effect of the parameter value p on the energy compaction performance of the CHP, the normalized restriction error (NRE) is utilized as follows (Jain, 1989):

$$J_m = \frac{\sum_{q=m}^{N-1} \sigma_q^2}{\sum_{q=0}^{N-1} \sigma_q^2},$$

where $J_m$ is the NRE and $\sigma_q^2$ is the arrangement of $\sigma_l^2$ in descending order. The NRE is computed using different values of the parameter p and of the covariance coefficient r. Figure 8 presents the NRE ($J_m$) as a function of the number of retained samples q. Two values of the covariance coefficient r are used. The energy compaction is influenced by the CHP parameter p. For example, when the parameter p increases from N/6 to N/3, the performance of the CHP in terms of energy compaction increases because the NRE for p = N/3 reaches its minimum value before that for p = N/6. When the value of p increases beyond N/3, the NRE is no smaller than that of p = N/3. Therefore, the CHP at p = N/3 uses fewer moments than the other values of p to reconstruct a signal efficiently.
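A compact way to reproduce this energy-compaction analysis numerically is sketched below (an illustration, not the paper's code); it assumes the Markov-1 covariance model M[i, j] = r**|i - j| and the NRE definition given above, with R the WNCHP matrix.

```python
import numpy as np

def nre_curve(R, r=0.95):
    """Normalized restriction error J_m for m = 1..N retained coefficients.
    Assumes a zero-mean stationary Markov-1 model with covariance r**|i - j|."""
    N = R.shape[0]
    idx = np.arange(N)
    M = r ** np.abs(idx[:, None] - idx[None, :])   # Markov-1 covariance matrix
    S = R @ M @ R.T                                 # transform-domain covariance
    var = np.sort(np.diag(S))[::-1]                 # variances, descending order
    return 1.0 - np.cumsum(var) / var.sum()         # J_m, m = 1, ..., N
```

Plotting this curve for several values of p reproduces the qualitative behavior described for Figure 8: the faster the curve drops toward zero, the better the energy compaction.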
Analysis of reconstruction error and maximum size
In this section, experiments are performed to evaluate the proposed algorithm against the existing algorithms in terms of the reconstruction error and the maximum size that can be generated. Different image sizes and CHP parameter values are used in the experiment. A cameraman image is selected as the test image for this experiment. The test image is resized several times to obtain I. For each image size, the WNCHP matrix R is generated using TTRn, TTRx, and the proposed algorithm. Then, the image is transformed into the moment domain M. Thereafter, the moments are transformed back into the spatial domain to obtain $I_r$. Lastly, the normalized mean square error (NMSE) between the input and reconstructed images is computed as follows (S. H. Abdulhussain et al.):

$$E_n = \frac{\sum_{x}\sum_{y}\left[I(x, y) - I_r(x, y)\right]^{2}}{\sum_{x}\sum_{y} I(x, y)^{2}},$$

where $E_n$ represents the NMSE between the images I and $I_r$. The image is resized three times (N = 144, 160, and 256), and the three values of the CHP parameter are p = ⌊N/3⌋, ⌊N/2⌋, and ⌊2N/3⌋. The experiment results of the NMSE for TTRn, TTRx, and the proposed algorithm are presented in Figure 9.
As shown in Figure 9, the image can be fully reconstructed using TTRn, TTRx, and the proposed algorithm for the image size N × N = 144 × 144 and all the values of the parameter p. However, for the image size N × N = 160 × 160, the image can be fully reconstructed using TTRn for the parameter values ⌊N/3⌋ and ⌊N/2⌋, but TTRn fails to recover any image information for any order when the parameter value is ⌊2N/3⌋. In addition, the reconstruction of the image by using TTRx can be fully performed for the parameter values ⌊N/3⌋ and ⌊2N/3⌋, whereas the reconstructed image exhibits an error when the parameter value is ⌊N/2⌋ for moment orders greater than 40 because of the zero values of the polynomial coefficients for high-order polynomials (see Figure 2). For the image size N × N = 256 × 256, image reconstruction using TTRx fails for any parameter value because of the propagation error in the computation of the CHP. Meanwhile, TTRn successfully reconstructs the image only for the CHP parameter values ⌊N/3⌋ and ⌊2N/3⌋ but fails to reconstruct any image information for the parameter value ⌊N/2⌋. By contrast, the proposed algorithm can efficiently reconstruct the image for all the image sizes and CHP parameter values. The maximum size that can be generated is further investigated using the proposed polynomial. To find the maximum size for which each algorithm can generate the CHP without propagation error, the following steps are performed for each algorithm: (1) Small values are set accordingly for the polynomial order N and the parameter p.
(2) The cameraman image I is resized to N × N.
(3) The CHP matrix R with an order of N is generated.
(4) The moments M = R × I × R^T are computed.
(5) The image I_r = R^T × M × R is reconstructed.
(6) The NMSE between I and I_r is computed.
(7) If the NMSE is less than 10^−3, the size is increased; otherwise, the previous value of N is considered the maximum size that the algorithm can generate (this search loop is sketched below). Note that for small image sizes the threshold is set to 18 × 10^−3.
Table 6 lists the results obtained from the aforementioned procedure. The results include the maximum size for the TTRn, TTRx, and proposed algorithms corresponding to different values of the CHP parameter p. The results show that the proposed algorithm is better than the existing algorithms in terms of the size that can be generated for the CHP with different values of the parameter p. The improvement achieved by the proposed algorithm and the best performance among the existing algorithms (TTRn and TTRx) are also reported for better illustration. The improvement values show that the minimum improvement is 19.25 at p = 5N/6, which increases to 42.85 at p = N/2.
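The following is one possible implementation of the search loop in steps (1)-(7); it is a sketch under assumptions rather than the authors' code. In particular, `generate_chp(N, p)` stands for any routine returning the N x N WNCHP matrix (e.g., the proposed recurrence), and `resize_fn` stands for any image-resizing routine.

```python
import numpy as np

def nmse(img, rec):
    """Normalized mean-square error between the original and reconstructed images."""
    return np.sum((img - rec) ** 2) / np.sum(img ** 2)

def max_generable_size(generate_chp, image, resize_fn, p_ratio=0.5,
                       start=64, step=16, threshold=1e-3):
    """Grow N until the full-order reconstruction error exceeds `threshold`,
    then return the last N that passed (steps (1)-(7) above)."""
    N, last_ok = start, None
    while True:
        p = max(1, int(round(p_ratio * N)))
        R = generate_chp(N, p)                          # N x N WNCHP matrix
        I = np.asarray(resize_fn(image, (N, N)), dtype=float)
        I_r = R.T @ (R @ I @ R.T) @ R                   # moments, then reconstruction
        if nmse(I, I_r) < threshold:
            last_ok, N = N, N + step
        else:
            return last_ok
```

The step size and starting N are illustrative choices; a bisection over N would give the same answer with fewer polynomial generations.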
Conclusion
This study introduces a new identity of the WNCHP and a reduced form to compute the initial values of the WNCHP. They are then used to present a new recurrence algorithm for the efficient computation of the WNCHP with high orders. The analyses of the recurrence relation and of the numerical instability of the CHP are explained. The identity relation of the CHP about the main diagonal is beneficial for reducing the number of recurrence times: the CHP plane is divided into two portions, the CHPCs are computed for one portion, and the identity relation is then utilized to find the CHPCs of the second portion. The analysis clearly indicates that the proposed recurrence algorithm has significantly less computational load than the existing recurrence algorithms. In addition to its ability to reduce numerical propagation errors, the proposed algorithm also accelerates the computation of the CHPCs. Although the proposed recurrence algorithm achieves remarkable results, its performance can still be improved to generate higher-order polynomials by examining the causes of the zero values in the coefficients, particularly for parameter p values that are less than N/2.
"Computer Science"
] |
Hydro-elastic Complementarity in Black Branes at large D
We obtain the effective theory for the non-linear dynamics of black branes---both neutral and charged, in asymptotically flat or Anti-deSitter spacetimes---to leading order in the inverse-dimensional expansion. We find that black branes evolve as viscous fluids, but when they settle down they are more naturally viewed as solutions of an elastic soap-bubble theory. The two views are complementary: the same variable is regarded in one case as the energy density of the fluid, in the other as the deformation of the elastic membrane. The large-D theory captures finite-wavelength phenomena beyond the conventional reach of hydrodynamics. For asymptotically flat charged black branes (either Reissner-Nordstrom or p-brane-charged black branes) it yields the non-linear evolution of the Gregory-Laflamme instability at large D and its endpoint at stable non-uniform black branes. For Reissner-Nordstrom AdS black branes we find that sound perturbations do not propagate (have purely imaginary frequency) when their wavelength is below a certain charge-dependent value. We also study the polarization of black branes induced by an external electric field.
Introduction and summary
The limit of a large number of dimensions D concentrates the gravitational field of a black hole within a short distance ∼ 1/D of its horizon, leaving an undistorted background outside it [1,2,3]. It is then natural to expect that the black hole and its field can be replaced by a thin, effective membrane-like object in the background geometry. Equations for this effective membrane have been recently derived in [4,5,6,7,8,9,10], in different formulations and in regimes that overlap but do not entirely coincide. In particular, ref. [4] obtained fully covariant equations for static black holes, both in vacuum ($R_{\mu\nu} = 0$) and in (Anti-)deSitter ($R_{\mu\nu} = \pm\Lambda g_{\mu\nu}$), including leading-order and next-to-leading-order terms in the 1/D expansion. This allows one to consider fluctuations on horizon scales of order one, but also of order 1/√D, as is appropriate, as we explain below, for black branes.
These simple covariant equations were extended in ref. [6] to stationary black holes, but now restricted to vacuum and to leading order in the expansion. The first formulation to include time dependence was achieved in ref. [5], working to leading order and for vacuum black holes. It can describe horizon shapes that vary on length and time scales of order one, and velocities along the horizon that are also of order one - hence not yet capable of capturing black brane dynamics. As it happens, in order to obtain the time-dependent effective theory for the latter, rather than calculating the next-order corrections to the equations of [5], it is more efficient to solve for a black brane ansatz which readily yields a set of simple effective equations (for vacuum or Anti-deSitter) that can be easily studied and solved [8]. Similar specific ansätze have made it possible to obtain new results for black strings and black rings [7,9]. In a sense, the existence of all these formulations reveals a trade-off between, on the one hand, the goal of a formulation with the highest generality, and on the other hand, the desirability of simple equations for specific systems, which are easy to derive and to use for obtaining new results. Hopefully, further advances will combine the best of all these approaches.
In all cases, the theory is naturally formulated using geometric variables that describe how the membrane is embedded in the background, plus velocity fields for motion (e.g., rotation) along the spatial directions of the horizon. Charge on the black hole brings in an additional effective field for the charge distribution on the membrane [10]. 1 A basic question is whether the equations of the effective theory can be understood in terms of familiar physics. Previous results suggest two different kinds of interpretation: • Refs. [4,6] find that stationary black holes at large D are solutions of an elastic theory: the effective membrane must satisfy eq. (1.1), where K is the trace of the extrinsic curvature of the membrane, the factor γ accounts for the gravitational and Lorentz boost redshifts on the membrane, and the constant κ is the surface gravity of the black hole. When γ = 1 (e.g., static black holes in a Minkowski background) this is the familiar Young-Laplace equation for soap bubbles (membrane interfaces).
• The effective theory of large-D black branes must contain hydrodynamic features, since it is the non-linear theory of the lowest-frequency quasinormal modes of the brane in the large-D limit. These have been shown in [15,16] to be precisely, and only, the modes that at long wavelengths produce hydrodynamic behavior on the brane [17,18,19].
How, then, are these elastic and hydrodynamic viewpoints reconciled? Put pictorially: are ripples on a black brane like pressure waves on a fluid, or rather like wrinkles on a membrane?
In this article we reformulate and extend the results in [8] to obtain the large-D effective theory of black branes-now including charge, in asymptotically flat (AF) or Anti-deSitter (AdS) spacetimes-in a formulation that incorporates the two perspectives in a complementary way. That is, we can write the equations for the dynamical evolution of the black brane in hydrodynamic form, but for stationary configurations they are more naturally viewed as soap-bubble-type equations like (1.1). These two views (neither of which was manifest in [8]) are complementary in that the same variable is interpreted, in one case as the energy density of the fluid, in the other case as the 'height function' measuring the deformation of an elastic membrane. 2 Indeed this reflects a basic feature of black holes: the same variable that gives their mass also sets the horizon size.
In more detail, by solving the Einstein equations for black branes to leading order in the 1/D expansion, we obtain a hydrodynamic theory of a compressible, viscous fluid, with a conserved particle-number current (from the charge density on the brane). Even though the large-D expansion and the hydrodynamic gradient expansion are different, the effective fluid stress tensor contains only a finite number of terms beyond the viscous stress, indeed just a single term with two derivatives of the mass density. Reinterpreting the mass density variable as the local radius of the horizon, this term completes the extrinsic curvature for a membrane embedded in a background and yields eq. (1.1) for stationary configurations.
We restrict our analysis to black brane configurations, with spatial horizon topology R^p × S^{D−p−2} in AF and R^{D−2} in AdS. (2: Despite similarities in wording, this is very different from the blackfold effective fluid on an elastic membrane [20], and also from the usual membrane paradigm [21], which does not possess these elastic aspects.) The fluctuations occur along the extended brane directions and have wavelengths ∼ 1/√D (which is the scaling of the sound speed on the brane). For AF branes the sphere is kept metrically round, and only its size varies. It might seem desirable to have a more covariant formulation that can account for fluctuations on scales ∼ 1/√D of the sphere, and more generally not only of black branes, but also of black holes with, e.g., a topologically spherical horizon S^{D−2}. However, the analysis of linearized perturbations of spherical black holes reveals that quasinormal modes with partial wave numbers ∼ √D do not decouple from the far-zone background, so they are not amenable to an effective theory [15]. Therefore, the restricted covariance of our ansatz seems to be an inevitable feature of the effective dynamics at these horizon wavelengths.
In addition to black branes with a charge coupled to a Maxwell-type 1-form potential, often dubbed Reissner-Nordstrom (RN) black branes, we have also obtained the effective theory of black p-branes charged under a (p + 1)-form potential. This extended p-brane charge cannot be redistributed along the worldvolume and therefore is not associated to any local degree of freedom for a particle-number density in the effective theory. Both types of charge modify the tension of the brane, and hence the elasticity of the effective membrane. They also change the pressure of the effective fluid, and can alter its stability, as indeed we find.
Additionally, it is also easy to incorporate an external electric field parallel to the brane, and study the polarizing effect it has on the brane. Similar black hole polarization in global AdS has been recently studied in [22].
The large-D effective theory of black branes and the hydrodynamic fluid/gravity theories relate and compare to each other in several respects: • The theories coincide in the regime of common validity of both, i.e., when, on the one hand, the results of [23,24] are expanded in the limit of large D, and, on the other hand, the effective theory that we obtain is regarded as a hydrodynamic gradient expansion.
• The fluid/gravity theories allow one to reconstruct all of the spacetime between the horizon and the boundary of AdS or Minkowski. The large-D effective theory would seem to be more limited since it reconstructs only a region of radial extent ∼ 1/D above the horizon. However, the large-D near-horizon dynamics is decoupled to all perturbative orders from the far-zone [15], so the spacetime out to the asymptotic boundary is simply the unperturbed AdS or Minkowski background.
• Both effective theories are limited to wavenumbers k ≪ D/r_0 (where r_0 is the horizon radius of the black brane). For hydrodynamics, this limitation is set by the temperature scale T ∼ D/r_0 of the fluid, while for the large-D expansion we regard it as coming from the size O(D/r_0) of radial gradients close to the horizon.
But even if neither the fluid/gravity nor the large-D effective theories reach up to momenta k ∼ D/r_0, we identify important phenomena on black branes that appear for k ∼ √D/r_0. At these scales, the large-D expansion includes all powers of k, while fluid/gravity hydrodynamics can only capture them order by order. This distinction constitutes the main advantage of the large-D theory as compared to hydrodynamic formulations of black brane dynamics.
The usefulness of the large-D effective theory in capturing finite-wavelength phenomena is shown in this article in the following: • AF black branes, both neutral and charged, suffer from the Gregory-Laflamme (GL) instability [25], and we can follow their evolution at large D until they settle down in a stable non-uniform configuration [26,27]. The latter can be viewed as an elastically deformed membrane. When the p-brane charge is large enough, the instability disappears.
• Reissner-Nordstrom AdS black branes are stable, but we find a phenomenon of charged silence: the frequency of sound modes is purely imaginary at wavelengths below a charge-dependent value, and thus sound propagation is shut off.
Both the existence of static non-uniform black branes and the charged silence phenomenon involve finite wavenumbers k ∼ √ D/r 0 and cannot be seen in hydrodynamics.
In sec. 2 we introduce the notion of hydro-elastic complementarity in the large-D effective theory of neutral black branes. In sec. 3 we derive the effective theory for Reissner-Nordstrom black branes (AF and AdS), i.e., branes with point (Maxwell) charges coupled to a Maxwell field, and develop their fluid and elastic properties. Sec. 4 describes briefly the introduction of an electric field and its polarization effect on the black brane. Sec. 5 gives the large-D effective theory of black p-branes with p-brane charge, coupled to a (p + 1)-form gauge potential. We briefly conclude in sec. 6. The appendices contain side remarks and some technical steps; in particular, appendix D gives a detailed derivation of the effective equations that follows and extends the approach of [4].
Notation and conventions
• For AF black p-branes we define n = D − p − 3, and similarly for AdS black branes, as in (1.5). We will often use n as the large parameter instead of D.
• We will distinguish physical magnitudes from those of the effective theory, which are appropriately rescaled by powers of n, by using boldface for the former; e.g., the physical energy density is n times the effective one. Appendix A summarizes the relations between them.
• We use units where 16πG = Ω_{n+1} = area of the unit S^{n+1}. 3
Neutral black brane redux: isothermal fluid, elastic membrane
The basic ideas can be most clearly explained by revisiting the effective theory of neutral black branes, either AF or AdS, that was derived in [8].
Let us begin by identifying how to take the large-D limit. This can be inferred from the properties of static, uniform black branes. In Eddington-Finkelstein coordinates 4 we write the AF black brane metric and the AdS black brane metric; in the latter, the cosmological constant is set to Λ = −n(n − 1)/2. As D → ∞ we will keep p finite in AF, while in AdS we will assume that the metric functions depend on only a finite number of coordinates.
The equation of state relating the energy density ρ ∝ r_0^n and the pressure P of the branes follows (using (1.5)), (3: The fact that G ∼ Ω_{n+1} ∼ n^{−n/2} → 0 at large n can be related to the vanishing of the gravitational field outside the near-horizon region [2,3].) (4: We denote by t the ingoing null coordinate. When n is large, dependence on t is the same as dependence on the asymptotic time that measures time in the effective theory.)
so the speed of sound of long-wavelength perturbations satisfies c_s^2 = dP/dρ = O(1/n). 5 Since c_s is small when n ≫ 1, we expect the large-n dynamics to be non-relativistic. This dictates the scalings with n required to capture this physics: we rescale in order to focus on small, O(1/√n) lengths along the brane, and in addition consider correspondingly small velocities (2.7). Thus we take the metric along brane directions to be as in (2.8). 6 Ref. [8] sought solutions in a Bondi-type gauge for the AF neutral brane (2.9) and the AdS black brane (2.10). A convenient way to proceed is to integrate over the sphere Ω_{n+1} in the AF case, and over the cyclic brane directions in AdS, and obtain theories of gravity in the reduced finite-dimensional spacetimes with a dilaton field for the size of the compactified spaces (see e.g., appendix B of [4]). The reduced AF and AdS theories can be related by analytic continuation in n, so only one of them needs to be explicitly solved [28,29]. One finds that 1/n terms in G_ij must be included for consistency of the Einstein equations at this order.
The solution found in [8] has u_t = −1, u_i = 0 (by gauge choice), with the metric functions written in terms of a finite radial coordinate R. In the AdS solution p_i ≠ 0 only along the finite number of non-cyclic brane directions.
Furthermore, the Einstein equations with a radial index imply that the collective fields ρ(t, σ) and p_i(t, σ) must satisfy the effective field equations (2.13) and (2.14). 7 Spatial brane indices i, j are raised and lowered with the flat metric δ_ij. (5: Imaginary c_s for AF black branes corresponds to the GL instability [20,19].) (6: As already mentioned, this scaling is not covered by the analysis of [5,10].) (7: We believe these are the conservation equations of the quasilocal stress-energy tensor at R → ∞ (with appropriate subtraction). However, extracting this stress tensor is subtle [4], so we omit it.)
Isothermal fluid
Eqs. (2.13) and (2.14) have the form of continuity equations for ρ and p_i, with ∫d^pσ ρ and ∫d^pσ p_i being conserved in time. This suggests that we change the variable p_i to a velocity v_i as in (2.15). Then (2.13) becomes the continuity equation for mass, (2.16), and (2.14) the momentum-stress equation (2.17). These are the equations of a non-relativistic, compressible fluid with mass density ρ, velocity v_i, and stress tensor τ_ij, given in (2.18). The first two terms in (2.18) correspond, respectively, to isothermal-gas pressure and to shear and bulk viscosities (recall that δ^i_i = p is finite for the AF black p-brane, but infinite for the AdS brane). Together with the entropy density and temperature, these properties reproduce the leading large-n results for AF and AdS black branes in the fluid/gravity correspondences of [23,19]. That is, if we take the large-n limit of the stress-energy tensor of the latter, including up to viscosity terms, and scale physical quantities as in appendix A, then we obtain the first two terms in (2.18). Negative P for ε = +1 gives rise to the Gregory-Laflamme instability [19,20].
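For orientation, one concrete realization consistent with the properties just listed (conserved integrals of ρ and p_i, an isothermal pressure that is negative for ε = +1, viscous stresses, and a single two-derivative term built from ln ρ) can be written as follows; the signs and normalizations are assumptions made for illustration and need not coincide with the paper's equations (2.15)-(2.18).

```latex
% Illustrative closed form (conventions assumed), with epsilon = +1 for AF, -1 for AdS:
\begin{align}
  p_i &= \rho\, v_i + \partial_i \rho \,, \\
  \partial_t \rho + \partial_i\!\left(\rho\, v^i\right) &= 0 \,, \\
  \partial_t\!\left(\rho\, v_i\right) + \partial^j\!\left(\rho\, v_i v_j + \tau_{ij}\right) &= 0 \,, \\
  \tau_{ij} &= -\epsilon\, \rho\, \delta_{ij}
    - 2\rho\, \partial_{(i} v_{j)}
    - \rho\, \partial_i \partial_j \ln\rho \,.
\end{align}
```

With this choice, linearizing around a uniform brane reproduces the behavior described below: an unstable (GL) branch for ε = +1 with threshold wavenumber of order one, and damped sound for ε = −1.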
The constitutive relation (2.18) contains one last term beyond the viscous stress. In fact, since the large-D expansion and the hydrodynamic gradient expansion are different, one might have expected an infinite number of higher-derivative terms in τ ij . Remarkably, the gradient expansion is truncated at a finite order when D → ∞. This implies that an infinite number of higher-order transport coefficients vanish in this limit [19]. In order to match the last term in (2.18) to the hydrodynamic second-order coefficients computed in [23] one must focus on the regime where both expansions agree. Hence we must not only take the large-n limit of [23], but also regard (2.18) as a perturbative gradient expansion, so that the hydrodynamic equations at first-derivative order can be used to rewrite the second-order term, with gradients of ρ, in terms of velocity gradients.
In the hydrodynamic interpretation, the last term in τ_ij is associated to the creation or dissipation of density inhomogeneities. But we can also interpret it in other ways. Let us write its divergence as a term proportional to the gradient of a potential. In this manner we can view this term as yielding an external gravitational force, proportional to the mass density, acting on the fluid. However, the origin of the gravitational potential from second derivatives of ρ is obscure. Since this rewriting will not be possible for dynamical charged black branes, we shall not dwell anymore on it. In appendix B we discuss another rewriting of the equations that leads to a simple but unusual identification of the fluid properties. Next we turn to a different interpretation.
Hydro-elastic complementarity
The variable ρ sets the horizon size R = R_h = ρ in the black brane solutions (2.9), (2.10), and as such it also determines the radial position of the effective membrane in the background geometry. Namely, the 'near-zone' solutions (2.9), (2.10) are matched to either the Minkowski background (2.24) or to the AdS background (2.25) at a 'membrane surface' r = r(σ). When n is large, this surface takes the form (2.27), which describes small, 1/n deformations of a uniform surface at r = 1. The trace of its extrinsic curvature, K, and the background gravitational potential √(−g_tt) on it are related as in (2.28) (see app. C). Let us collect the velocity-independent terms in eq. (2.17), and use (2.23) and (2.28) to write the result as (2.29). In addition, if we take the time derivative of (2.28) and use the mass continuity equation, we obtain eq. (2.30) for ∂_t K, which, to leading non-trivial order at large n, is equivalent to an equation with constant κ. It is straightforward to check that the latter is actually the surface gravity of the horizon in (2.9) and (2.10). In terms of the physical velocity v (see app. A) this equation is (2.32).
Static string solutions
For completeness and later reference, we review here the analysis in [4] of static solutions of (2.32), with v = 0, for which ρ depends on only one spatial coordinate z. We refer to these as static string configurations.
From (2.27) we see that P measures the O(1/n) fluctuation of the radius of the membrane.
Eq. (2.32) (with v = 0) can be integrated twice to obtain a first integral of the form ½(dP/dz)^2 + U(P) = E, with the potential U given in (2.37). Here E and ρ_0 are integration constants. We can view this as the classical mechanics of a particle (the undamped Toda oscillator), with position P, time z, potential U, and energy E. When ρ_0 > 0 the potential has an extremum at e^P = ρ_0.
Trajectories of the particle with non-constant P correspond to non-uniform string profiles. The potential is dominated by the linear term P for P > 0, and by ρ_0 e^{−P} for P < 0. Then, when ε = −1 there cannot be any non-trivial, bounded trajectories of the particle, and the only solutions correspond to constant P at a maximum of U where e^P = ρ = ρ_0 > 0.
These are uniform AdS black branes.
When ε = +1 and ρ_0 > 0 the potential has a minimum where U = 1 + ln ρ_0. The solution that stays at the minimum is the uniform AF black string, but now periodic trajectories also exist (for E > 1 + ln ρ_0), which give non-uniform black string solutions.
Although the equation cannot be integrated exactly, it is easy to obtain analytical approximations and numerical solutions [4], which match very well the profiles that result at the end of the time evolution of the dynamical equations (2.13), (2.14) [8].
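As an illustration of how such profiles can be generated numerically, the following minimal sketch integrates the particle mechanics for the AF case ε = +1, taking U(P) = P + ρ_0 e^{−P}, a form consistent with the minimum value 1 + ln ρ_0 quoted above; the choice of constants and the normalization of z are assumptions, and this is not the paper's own code.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Sketch: periodic 'Toda oscillator' orbits give static non-uniform string
# profiles rho(z) = exp(P(z)).  Assumed potential: U(P) = P + rho0*exp(-P).

rho0 = 1.0                       # integration constant (uniform-string value)
E = 1.0 + np.log(rho0) + 0.2     # energy slightly above the minimum -> periodic orbit

U = lambda P: P + rho0 * np.exp(-P)

def rhs(z, y):
    P, Pp = y
    # equation of motion: P'' = -dU/dP = rho0*exp(-P) - 1
    return [Pp, rho0 * np.exp(-P) - 1.0]

# start from the inner turning point, where U(P) = E and P' = 0
P_turn = brentq(lambda P: U(P) - E, np.log(rho0) - 5.0, np.log(rho0))

sol = solve_ivp(rhs, [0.0, 30.0], [P_turn, 0.0], max_step=0.01, rtol=1e-8)
rho_profile = np.exp(sol.y[0])   # non-uniform static black-string profile rho(z)
print("rho(z) ranges over [%.3f, %.3f]" % (rho_profile.min(), rho_profile.max()))
```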
Reissner-Nordstrom black branes
We study black brane solutions of the Einstein-Maxwell theory with either Λ = 0 or Λ = −n(n − 1)/2. Examination of the properties of the static, uniform black brane solutions of the theory 9 at large n shows that, with the scalings in (2.7), if the charge is to affect the metric at leading order then the gauge potential A_t must be of order one. 10 A_r can be gauge-fixed to zero. Our metric ansatz is the same as in the neutral case, i.e., (2.9) for AF and (2.10) for AdS.
We can now solve the Einstein equations for this ansatz perturbatively in the 1/n expansion. Direct calculation using computer algebra is a quick and efficient method for getting the solution we give below. However, it is also possible, and at times illuminating, to solve the equations step by step extending the systematic approach of [4] as we explain in appendix D.
Our solution to the Einstein and Maxwell equations is written in terms of the collective fields ρ(t, σ), q(t, σ), p_i(t, σ) of the brane, which must solve the effective field equations (3.6)-(3.8). In terms of the quantities ρ_± defined there, the horizon is at R = ρ_+. We could gauge-transform A_t to vanish there.
In principle the extremal limit ρ = √2 q is outside the range of validity of the approximations involved in the derivation. However, the fact that the limit √2 q → ρ of the effective equations appears to be a smooth one (in particular in AdS) is suggestive that they may also be applicable at extremality. Nevertheless, a more careful analysis, not done here, is needed to ascertain this point. 11 (10: Scaling the gauge field as A_t = O(1/√n) makes it a test field at leading large-n order. The construction of charged black branes by Kaluza-Klein reduction of the neutral branes of sec. 2 results in this theory.) (11: Ref. [31] argues that hydrodynamics applies to Reissner-Nordstrom AdS_4 black branes even at T = 0, which suggests that the effective large-D theory may have a continuous extremal limit.)
Dynamical fluid
Since eq. (3.6) is the same as the neutral equation (2.13), we change again to velocity variables (2.15). In addition to the mass-continuity equation (2.16), eqs. (3.7) and (3.8) take the form of a charge continuity equation (3.10) with charge current (3.11), and of the momentum conservation equation (2.17) with a modified stress tensor. This is a non-relativistic, compressible fluid, with pressure and with shear and bulk viscosities (3.14), and a conserved particle number ('baryon charge') with density q. The current (3.11) has a diffusive term, proportional to the gradient of q/ρ. If ρ is uniform then (3.10) is the conventional diffusion equation for q, with diffusion constant equal to one, which drives the charge density to smoothen uniformly. More generally, when ρ is not uniform the charge density q diffuses to become proportional to ρ. By looking at the equations for a test Maxwell field on the large-D black brane, one sees that this charge diffusion on a horizon is essentially the same as in the membrane paradigm [21].
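For orientation, a charge-continuity equation and current of the following schematic form reproduce the properties just listed (unit diffusion constant for uniform ρ, and relaxation towards q ∝ ρ); the normalization is an assumption and the precise coefficients of (3.10)-(3.11) are not reproduced here.

```latex
% Illustrative form of charge continuity and the diffusive current (normalization assumed):
\begin{align}
  \partial_t q + \partial_i j^i &= 0\,,
  &
  j_i &= q\, v_i \;-\; \rho\, \partial_i\!\left(\frac{q}{\rho}\right).
\end{align}
```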
As we discussed in sec. 2, the truncation of the hydrodynamic expansion to a finite order in gradients is a remarkable consequence of the large-n limit. Up to viscous, first-order gradients, the hydrodynamic theory that we have found for Reissner-Nordstrom AF black branes can be recovered as the large-n (non-relativistic) limit of the first-order hydrodynamics derived for them in [24,32]. However, the last term in τ_ij is new. Unlike in neutral branes, it cannot in general be written as a gravitational force, but as we will see next it is crucial for the elastic interpretation.
Elastic membrane
Eq. (2.30) for the time derivative of the extrinsic curvature was obtained using only the mass continuity equation (2.16) and therefore applies also for RN black branes. However, the spatial derivative of the extrinsic curvature does not seem to take, in the general time-dependent case, as simple a form as (2.29). We will then assume that the system is in a configuration of stationary equilibrium.
Under the evolution equation (3.10), the charge density diffuses until an equilibrium state is reached in which q/ρ is constant, and hence q is proportional to ρ. In addition, when the configuration is time independent and with Killing velocity vector, the momentum equation reduces to a stationarity condition. When ε = +1 we can use (2.23), (2.28) and (3.16) to write this equation in elastic, soap-bubble form. When v = 0 this is equivalent to a constant-mean-curvature condition in which κ is the surface gravity of the black brane; the equality holds at O(n) and O(1).
When v ≠ 0, and in AdS, K is not simply proportional to a redshifted form of the surface gravity of the black brane. Effective theory magnitudes like K are measured at the membrane location in the 'overlap zone' at R → ∞, while κ is measured at the horizon.
The relation between the two, as observed in ref. [4], is simple for neutral black holes in e.g., Minkowski backgrounds and/or leading large-n order, but not in general.
Nevertheless, one can change variables so as to recover the stationary equation in a similar form; the stationary equation can then be written as a constant-mean-curvature condition for K̃, where K̃ is the trace of the extrinsic curvature of the embedding in the background metric with brane coordinates σ̃_i.
Since static charged black branes satisfy the same equation of constant mean curvature as neutral ones, the non-uniform static black string solutions discussed in sec. 2.3 give also the profiles for charged black strings.
Thermodynamics and the entropy law
For the charged black brane, in addition to the energy density and charge density we have other local variables, namely the entropy density, temperature, and chemical potential. These are obtained in the conventional manner from the black brane solution, and have been renormalized by appropriate factors of n to render them finite in the effective theory (app. A). As such, they satisfy the generic (non-relativistic) thermodynamic equations as well as black-hole-specific relations such as η/s = 1/(4π).
Using these expressions we can write the charge current (3.11) in the canonical form, with a transport coefficient κ_q given in (3.29). A conventional argument now shows that the second law is satisfied in this system. Namely, the continuity equations for ρ and q, (2.16) and (3.10), together with (3.26), imply an evolution equation for the entropy density. Then we identify the entropy current, and since the right-hand side of (3.30) is manifestly non-negative, it follows that the total entropy cannot decrease. Observe that charge diffusion generates entropy, but viscosity does not: its production of entropy is suppressed by a factor 1/n.
When the charge is small, so that T s ≈ ρ, the diffusion coefficient (3.29) reproduces the value κ_q = η/(4π) associated to the resistivity of neutral black holes in the membrane paradigm [21].
Quasinormal modes: Gregory-Laflamme instability (AF) and charged silence (AdS)
Introduce a small perturbation of the static uniform state, and solve to linear order in the perturbation. Defining the constants a_±, we find three different kinds of modes: Charge diffusion mode. The perturbation has δq/δρ ≠ q_0/ρ_0 and the frequency is purely imaginary. This is the expected purely dissipative mode for charge diffusion.
Shear mode. The frequency is again purely dissipative. Since a_+ ≤ 1, we find that charge diffuses more quickly than shear.
Sound modes. These have δq/δρ = q_0/ρ_0 (so q/ρ remains constant and there is no charge diffusion) and frequencies ω_±. The properties of sound differ markedly depending on whether the brane is AF or AdS: AF branes: GL instability. When ε = +1, whenever k < k_GL = 1 the +-mode is an unstable mode with real and positive growth rate Ω. This is the GL instability at large D. The threshold wavenumber k_GL is the same independently of charge, and for all 0 < k < k_GL the rate Ω decreases for larger q_0/ρ_0. Hence the presence of charge makes the instability weaker.
AdS branes: charged silence. When ε = −1 the sound modes behave as follows. At long wavelengths sound propagates with the expected speed c_s = 1, i.e., 1/√n in physical units. As k increases the effective sound speed is reduced due to viscous damping, until a critical wavenumber k_c is crossed, beyond which the two sound modes are still stable but their frequency is purely imaginary, so sound does not propagate: the brane is silenced at short scales as a consequence of charge. In proper physical scales the critical value (in units r_0 = 1) is a relatively large momentum, but still below the proper scale of momenta n(a_+ − a_−) at which the 1/n expansion ceases to be applicable.
One might suspect that the effect could be present only when n → ∞, so that at finite n sound at short scales could be attenuated but not completely silenced. However, the phenomenon persists at the next order in 1/n. We have computed the 1/n correction ω^{(1)}_± to the quasinormal sound frequencies, with ω_± the values in (3.41); we omit details of the calculation. This correction diverges at k = k_c, so we should not trust the result there, but at other momenta it should be reliable as long as |ω^{(1)}_±| < n. Under this condition, when k > k_c both frequencies ω_± are purely imaginary. 12 So charged silence may be present at finite n. However, the possibility must be kept in mind that non-perturbative effects in 1/n may change this conclusion. Unfortunately, there are no independent calculations at finite n that could decide this issue. Ref. [33] computed numerically the quasinormal sound frequency for non-extremal RN-AdS_4, i.e., n = 3, and reached a lowest temperature of T = 0.0109µ, for which the critical momentum (3.43) is k_c/µ = n a_+/2a_−^3 = 2.6. 13 This is much larger than the largest momentum, k/µ = 0.25, reached by the quasinormal mode calculation in [33], which did not see the effect.
In the effective theory the phenomenon originates in the damping of sound by viscosity, which, given the truncation of the hydrodynamic expansion at a low gradient order, is not inhibited by other stronger short-distance physics on the brane.
There is some similarity between this phenomenon and the disappearance of zero-sound at the transition between the collisionless regime and the hydrodynamic regime, first examined holographically in [34]. However, zero-sound disappears as a result of increasing thermal excitations, while in our case hydrodynamic sound is silenced as charge is added and hence the temperature decreases, so a different microscopic mechanism would seem to be behind it.
We have also found that the same effect is present, also including 1/D corrections, for spherical RN-AdS black holes: the lowest gravitational scalar quasinormal modes become purely damped at finite charge for sufficiently large partial-wave number ℓ.
Non-linear evolution
It is straightforward to solve numerically the effective equations (3.6), (3.7), (3.8), extending the study in [8]. The results are qualitatively the same: thin-enough AF charged black branes are unstable, and they evolve to settle down into static stable non-uniform configurations with constant K. Uniform AdS black branes are stable.
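As a rough illustration of such an evolution, the following minimal sketch evolves the 1D neutral black-string equations in a form consistent with [8] and with the features stated above (conserved integrals, GL threshold k_GL = 1, relaxation towards a static non-uniform profile). The signs and normalizations are assumptions made for illustration; the paper's charged equations (3.6)-(3.8) are not reproduced here.

```python
import numpy as np

# Assumed 1D neutral effective equations (illustrative only):
#   dm/dt = d2m/dz2 - dp/dz
#   dp/dt = d2p/dz2 + dm/dz - d(p^2/m)/dz
# Linearizing gives growth rate Omega = k - k^2, i.e. instability for k < 1.

L, N = 4 * np.pi, 256                 # periodic domain (longest mode has k = 0.5 < 1)
dz = L / N
z = np.arange(N) * dz
k = 2 * np.pi * np.fft.fftfreq(N, d=dz)

def dx(f):  return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))
def dxx(f): return np.real(np.fft.ifft(-(k ** 2) * np.fft.fft(f)))

m = 1.0 + 0.01 * np.cos(2 * np.pi * z / L)   # perturbed uniform string
p = np.zeros(N)

dt, t_end = 2e-4, 60.0
for _ in range(int(t_end / dt)):             # simple explicit Euler time stepping
    dm = dxx(m) - dx(p)
    dp = dxx(p) + dx(m) - dx(p ** 2 / m)
    m, p = m + dt * dm, p + dt * dp

print("final profile: min(m) = %.3f, max(m) = %.3f" % (m.min(), m.max()))
```

With these assumed conventions the perturbation grows and the profile saturates into a static non-uniform configuration, mirroring the behavior described in the text.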
Polarized branes
We now want to consider the effect of an external electric field parallel to the brane. We find that we can include it easily when the gauge field is A_t = O(1/√n). This implies that the electric charge of the black brane is a test charge, i.e., it does not affect the geometry. Nevertheless, the external field does have an effect on the mass, charge and momentum densities on the brane. In particular, the field can polarize the brane, creating a non-trivial distribution of charge even in cases where the total charge vanishes. (12: Additionally, Re ω_± change sign in a narrow interval of k just below k_c. However, n may need to be very large for the 1/n expansion to be reliable there.) (13: With our normalization (3.1) for the gauge field, µ is twice the one in [33].)
With this scaling of the field, we find that the leading-order metric takes the same form as in the neutral solution, while the gauge field is determined by an external electric potential V(t, σ). Besides the mass continuity equation (2.16), the effective equations are those of charge and momentum conservation, with the electric current (3.11) and the stress tensor of the neutral black brane (2.18).
The electric field ∝ ∂_i V exerts a force on the charge of the fluid, but it also effectively contributes to the current, polarizing it towards equilibrium configurations with a non-uniform charge distribution. In a space with σ_i periodically identified we can introduce a non-trivial periodic potential V, constant in time, and easily solve these equations numerically. For example, for a brane initially uniform and with zero total charge, the polarizing effects in AdS show up easily, creating a stable, periodic non-uniform distribution of mass and charge that follows the external field.
In AF branes there is a competition between the explicit breaking of translational symmetry caused by a non-uniform polarizing field, and the spontaneous breaking due to the GL instability. Numerical study of the evolution under these competing effects shows that when the polarizing field is small, the final stable static black brane is almost unaffected by it, but when the polarizing field is large it can completely dominate the final state.
In the presence of a polarizing field, static solutions ρ(z) are given by an analysis similar to sec. 2.3. For static configurations we take a time-independent potential characterized by a constant V_0. Integrating twice, and assuming periodicity along z, we obtain an equation with U as in (2.37).
Brane-charged black branes
Now we study black p-brane solutions of the corresponding action, with the branes charged under a (p + 1)-form gauge potential.
Choice of large-n limit
In order to get oriented about how to take the large-n limit, we study first the static uniform solutions. At any finite n, refs. [30,35] give their energy density, pressure, p-brane charge, potential, temperature and entropy density in terms of two parameters r_0 and α (r_0 ≥ 0, |α| < ∞). The speed of sound is also known [35]. Let us now take the large-n limit with α fixed, so q_p/ρ and P/ρ remain O(1). Then the speed of sound is always positive, i.e., we never expect a GL instability, even though the p-brane is in general non-extremal. Moreover, we have c_s = O(1) instead of O(1/√n), so the system is relativistic. While there is nothing wrong with this limit, it is a regime of brane physics different from the one we are studying in this paper.
A different large-n limit is obtained for small charge q_p/ρ = O(1/√n), i.e., by rescaling α with an appropriate power of n as in (5.9) and keeping the rescaled parameter α̃ finite. Then the speed of sound is non-relativistic at large n, and c_s^2 can change sign if the charge is large enough. Therefore, scaling the metric as in (2.8) and the gauge potential accordingly, we expect to capture the physics of hydrodynamic sound and the appearance/disappearance of the GL instability.
Large D effective theory
Following these arguments, we take the ansatz (2.9) for the metric, together with a corresponding ansatz for the (p+1)-form gauge potential B. We find the solution in which q_p is constant, and we set u_t = −1, u_i = constant. We may also gauge-transform B to make it vanish at the horizon R = ρ. The effective field equations for ρ and p_i are (5.15) and (5.16).
Fluid dynamics
The change (2.15) casts these equations into explicitly hydrodynamic form. Besides the usual mass continuity equation (2.16) from (5.15), eq. (5.16) gives the momentum equation. The only change in the effective fluid relative to the neutral one is in the pressure. This is indeed the large-n limit of the pressure of the black brane discussed above in sec. 5.1, with the appropriate translation between physical and effective magnitudes (app. A). The charge q_p is a global parameter of the fluid, only affecting its pressure, and not a local degree of freedom. Nevertheless, we can associate to it a 'local potential' Φ_p [35]. Since, in the large-n limit as we have taken it, the entropy density and temperature of the effective theory are the same as in the neutral fluid (2.21), q_p and Φ_p do not enter the first and second law of thermodynamics, and the entropy is conserved.
Elastic interpretation
Eq. (2.29) (with g_tt = −1) applies again to this system and gives ∂_t K, while eq. (5.17) can be written in hydro-elastic form, with the extrinsic curvature of the brane embedding in Minkowski given by (C.7). For stationary branes this can be recast in a form which expresses how the surface gravity κ remains constant over the entire horizon of the non-uniform, locally boosted brane.
Sound mode and GL instability
The shear mode is the same as in the absence of charge. Sound modes have frequencies ω_±. When q_p < ρ the frequency ω_+ presents a GL instability for wavenumbers k smaller than a threshold k_GL; that is, if, e.g., a string has length L < 2π/k_GL, in units where the horizon radius is r_0 = 1, then it is linearly stable. Note that q_p = ρ is not an extremal limit, which instead corresponds to √N q_p/ρ = 1. Since we are taking q_p/ρ = O(1/√n) we are always far below this limit. That is, the regime we can access of ρ < q_p ≪ √n ρ is one of black branes with regular, non-extremal horizons, but stable ones.
Numerical evolution of the non-linear equations (5.15), (5.16) confirms that small perturbations of uniform black branes with q_p < ρ grow and evolve until a static non-uniform solution is reached, while if q_p > ρ the brane reverts back to the uniform state.
Static string solutions
The analysis of sec. 2.3 yields in this case the mechanics of a particle in a modified potential, with constant ρ_0. Now, in order to have a minimum of the potential (where e^P = ρ_0) we need ρ_0 > q_p, which means that we are in the range where the uniform string is unstable and tends to develop non-uniformities. In this case, the competition between the terms +e^{−P} and −e^{−2P} makes the oscillations of P take a longer time near its smallest values; that is, the neck of the non-uniform string, where it is thinner, becomes longer as q_p grows.
Stationary black holes are already known to be described by an effective elasticity theory [4,6], but the complementary view in terms of a simple dynamical theory of continuous mass distributions is absent. It would be interesting if the more general time-dependent effective theory of [5,10] could be reinterpreted in that manner.
Acknowledgements
We are grateful to Simon Gentle, Christiana Pantelidou and Javier Tarrío.
For Maxwell-charged RN black branes we are taking q/ρ = O(1), whereas for p-brane charge we take q_p/ρ = O(1/√n).
Now all the equations have simple diffusive terms on the right-hand side, all with the same diffusion coefficient, while the stress term ∝ ∂_i∂_j ln ρ has disappeared.
When q = 0 there is an apparent negative viscosity η̃ = −ρ, which would seem to yield anti-dissipation, but this effect is offset by the extra diffusive terms. Another peculiarity is that configurations that are static in the Landau frame are stationary with non-zero velocity ṽ_i = ∂_i ln ρ in the diffusive frame. Whether the existence of this frame for black branes is particularly useful or significant is at present unclear.
C Extrinsic curvature of membrane embeddings
Let us calculate the area A and volume V of the surface (2.27). In order to ease the notation a bit we use the variable P(σ) = ln(ρ(σ)). (C.1)
When n is large, we can evaluate these in the Minkowski background (2.24) and in the AdS background (2.25). We compute the trace of the extrinsic curvature K by functional differentiation.
This yields explicit expressions for K in Minkowski and in AdS. In AdS there is, in addition, a non-trivial gravitational redshift on the surface. With these results we obtain a relation equivalent to (2.28).
D Systematic derivation of the effective equations
Here we derive the effective fluid equations (3.6-3.8) by hand in a systematic way, without recourse to computer algebra. The approach can be regarded as an extension of the derivation that [4] did for the static case.
D.1 ADM-type formulation
We adopt an ADM-type formulation, in which we decompose the spacetime into the radial direction and the rest. In the large D limit the radial gradients are O(D). The Einstein equation is decomposed into an evolution equation and constraint equations, where K^µ_ν is the extrinsic curvature on a constant-ρ surface, and E, J_µ and S_µν are different radial components of the stress tensor; here n^µ ∂_µ := N^{−1}(∂_ρ − N^µ ∂_µ) is the normal vector and ⊥_µν := ḡ_µν − n_µ n_ν is the projection tensor.
We also decompose the Maxwell equations. We define the (D−1)-dimensional 'electric' and 'magnetic' tensors E_µ and B_µν. 15 For convenience we also define the D-field and H-field. 16 Greek indices are raised and lowered with g_µν. The Maxwell equations then decompose into four equations: the first two give the evolution in the ρ-direction, while the last two are constraints. One can also calculate the energy-momentum components E, J_µ and S_µν of the Maxwell field.
D.2 Solving the equations
We first solve the equation for the mean curvature, that is, the trace of eq. (D.3). The contribution from the Maxwell field is given in (D.18). (15: Of course these are not the electric and magnetic fields in the usual sense.) (16: In GR-MHD the Maxwell field is decomposed in a similar manner.)
Supposing that the field strength is of order unity, i.e., E_µ ∼ O(1) and B_µν ∼ O(1), the Maxwell field does not appear in the leading order of eq. (D.17). 17 The situation is similar for the q-form field and the massless/massive scalar. 18 Therefore, as in the neutral case, we can solve eq. (D.17) for the leading order of K, where we set the lapse N = r_0(x)/n by a choice of gauge. Now, we introduce an ansatz that represents the deformed black p-brane, where ω_IJ is the metric of the S^{n+1} sphere and R_0 is assumed to be constant. The (p+1)-dimensional metric G_ab is written in terms of velocity fields, where ũ^i = γ^{ij} u_j and u^2 = ũ^i u_i. Here, the orders of the metric functions are assumed so as to capture (at a non-linear level) the physics of the lowest quasinormal modes of the black brane; these are responsible for the hydrodynamic behavior and the Gregory-Laflamme instability. The shift vector is chosen so that the whole spacetime takes the form of an ingoing Eddington-Finkelstein metric. In this gauge, the boundary condition on the apparent horizon is just the regularity of each metric component.
With the metric ansatz (D.20), the leading order of eq. (D.17) implies that r_0^2 is a constant. Since the other components of the Einstein equation have contributions from the Maxwell field at leading order, next we must solve part of the leading-order Maxwell equations. We assume that the field strength has the scaling (D.25) at large D. 19 (17: If some directions are magnified as x^µ → √D x^µ, then we should assume another scaling, like E_µ ∼ O(D^{−1}).) (18: A large scalar mass of O(D^2) will change the situation.) (19: One can obtain this scaling by setting the gauge A_ρ = 0 and assuming A_t ∼ O(1), A_i ∼ O(n^{−1}).)
These assumptions lead to the following structure. The leading order of eq. (D.11) can be solved, with the coefficients chosen for later convenience. From the definition of D_µ (D.9), we obtain (D.28), where q^i := δ^{ij} q_j and u^i := δ^{ij} u_j. Substituting eq. (D.28) into the first constraint in eq. (D.13), we can obtain the effective equation for the charge density, where q_i is determined by the regularity of E_i, after part of the leading geometry and B_ti are solved. Then, we can write the contributions of the Maxwell field to the Einstein equation in terms of q and q_i. With the asymptotic condition φ ≃ ρ and the regularity at the horizon ρ = 0, the solution becomes φ ≃ ln m + ln[(1 − χ^2) cosh^2(ρ/2) + χ^2], (D.34) where χ is an integration function and q/(√2 χ) (=: m) denotes the deformation of the horizon area density.
D.3 Embedding condition
We assume r 0 = 1 after setting R 0 → ∞ for the AF case and L → ∞ for the AdS case. | 10,640 | 2016-02-18T00:00:00.000 | [
"Physics"
] |
Crossing Probabilities of Multiple Ising Interfaces
We prove that in the scaling limit, the crossing probabilities of multiple interfaces in the critical planar Ising model with alternating boundary conditions are conformally invariant expressions given by the pure partition functions of multiple SLE(κ) with κ = 3. In particular, this identifies the scaling limits with ratios of specific correlation functions of conformal field theory.
Introduction
The Ising model is arguably one of the most studied lattice models in statistical physics. It was introduced in the 1920s by W. Lenz as a statistical model of two types of spins (⊕, ⊖) defined on a lattice (e.g., Z d ), describing magnetic material. This model was further studied by Lenz's student, E. Ising, who proved that there is no phase transition in dimension one, leading him to conjecture that this is the case also in higher dimensions. However, as R. Peierls showed in 1936, in two (and higher) dimensions an order-disorder phase transition in fact occurs at some critical temperature, identified soon thereafter by a duality argument in 1941 by H. Kramers and G. Wannier (and later rigorously proven by L. Onsager, 1944). During the next decades, renormalization group arguments and the introduction of conformal field theory (CFT) suggested that, due to its (continuous, i.e., second-order) phase transition, at the critical temperature the planar Ising model should enjoy conformal invariance in the scaling limit as the lattice mesh tends to zero [BPZ84,Car96]. Ever since, there has been active research pertaining to understanding the planar Ising model at criticality, with recent success towards proving the conformal invariance via methods of discrete complex analysis [Smi06, Smi10, CS12, CI13, HK13, HS13, CDCH + 14, CHI15,CHI21].
From the physics point of view, the scaling limit of the critical Ising model should be described by some CFT. Not only does such a statement lack mathematical rigor, but the limiting object(s) may not even be well-defined. Interestingly, the scaling limit of the random field formed by the spin variables can be described, not as a random function, but as a random distribution [CGN15,CHI15]. However, e.g. the energy density only has a rigorous description in the scaling limit at the level of its correlation functions [HS13], and it seems unclear whether it is even possible to interpret it probabilistically as a random field.
Important geometric information about the model is encoded in its crossing probabilities. The goal of the present article is to identify the scaling limits of boundary-to-boundary crossing probabilities in the critical planar Ising model as (ratios of) specific correlation functions of CFT (namely those of a so-called degenerate field on the boundary with conformal weight h_{1,2} = 1/2 in a CFT of central charge c = 1/2). We emphasize that, although such a CFT field has not been mathematically defined, its correlation functions can be understood as functions of several complex or real variables, uniquely determined as solutions to certain PDE boundary value problems. The appropriate partial differential equations (given in Equation (1.2)) are a special case of PDEs known in the physics literature as "level-two BPZ" or "null-field" equations [BPZ84]. They are hypoelliptic equations with singularities on the diagonals (that is, when variables collide). These singularities can be used to determine, in a sense, the boundary behavior of the solutions of interest: by imposing specific asymptotic properties motivated by CFT fusion rules (see [BBK05, Section 8] for heuristic derivations), one can single out functions that describe the crossing probabilities in the scaling limit (see Theorem 1.1 for the precise statement). Interestingly, these correlation functions are also reminiscent of those of the free fermion field, or energy density, on the boundary, cf. [ID89, DFMS97, BBK05, Hon10, HS13, FSKZ17, Pel19]. (In Figure 1.1, the first marked point on the left corresponds to the lower corner of the rotated square, and we follow the marked points counterclockwise. We call the labels α ∈ LP_N "link patterns".)
We shall next give the precise statements of our result. We consider the critical Ising model on a δ-scaled square lattice approximation Ω δ of a simply connected subdomain Ω of the plane. One could, in principle, consider much more general graphs (e.g., isoradial [CS11,CS12]) and thus also address universality for the Ising model: its scaling limit should not depend on the microscopic details of the approximation scheme. This would be, however, highly technical, so we stick to the simplest setup.
We impose alternating boundary conditions: we divide the boundary into 2N segments, N of which have spin ⊕ and N spin ⊖, and the different segments alternate; see Equation (5.1) and Section 5 for details. With such boundary conditions, N random macroscopic interfaces connect pairwise the 2N marked boundary points, as illustrated in Figure 1.1. Supplementing the celebrated results of [CDCH + 14] in the case of N = 1, it was proved in [Izy17] that "locally" (i.e., up to a stopping time for the growth of the curve), these interfaces converge (weakly in terms of probability measures on curves) in the scaling limit δ → 0 to a multiple Schramm-Loewner evolution process (a version of N -SLE κ with κ = 3).
In this article, we are interested in the crossing probabilities of these interfaces as functions of the domain and the marked boundary points. Our main result, Theorem 1.1, identifies the scaling limits of these crossing probabilities in terms of so-called partition functions of multiple SLE 3 curves. In particular, the limit crossing probabilities are conformally invariant and match with their CFT predictions; see [BBK05, Section 8.3] for a detailed discussion.
Update. In addition, as pointed out recently by A. Karrila [Kar18], for the convergence near the marked points we need a slightly stronger notion, close-Carathéodory convergence, discussed in Section 5.1. Namely, the Carathéodory convergence allows wild behavior of the boundary approximations, while in order to obtain precompactness of the random interfaces as δ → 0, a slightly stronger convergence which guarantees good approximations around the marked boundary points is required.
Theorem 1.1. Let (Ω; x_1, . . . , x_{2N}) be a bounded polygon with N ≥ 1, and suppose that discrete polygons (Ω^δ; x^δ_1, . . . , x^δ_{2N}) on δZ² converge to (Ω; x_1, . . . , x_{2N}) as δ → 0 in the close-Carathéodory sense. Consider the critical Ising model on Ω^δ with alternating boundary conditions. Denote by ϑ^δ the random connectivity pattern in LP_N formed by the N discrete interfaces with law P^δ. Then, the convergence (1.1) holds for all α ∈ LP_N, where Z^{(N)} and {Z_α : α ∈ LP_N} are the functions uniquely determined as the solution to the PDE boundary value problem given in Definition 2.1, also known as pure partition functions of multiple SLE_κ with κ = 3.
In the first nontrivial case of N = 2, the crossing formula (1.1) in Theorem 1.1 was predicted by L.-P. Arguin and Y. Saint-Aubin [ASA02]: the pure partition functions in this case are given by explicit formulas [BBK05,Izy15]. In general, explicit formulas for the probability amplitudes Z_α are not known, while the total partition function Z_Ising does have an explicit Pfaffian formula, already well-known in the physics literature (as a correlation function of the free fermion field, or energy density) and appearing, e.g., in [KP16,Izy17,PW19] in the SLE context. The functions Z_α are defined as appropriately chosen solutions to the PDE system (1.2) stated below, which appears in the CFT literature as a "null-field", or "BPZ", equation [BPZ84,ID89,DFMS97,BBK05]. From the SLE point of view, the functions Z_α are total masses of "pure" multiple SLE measures, see [KP16,PW19].
Knowing that the Ising interfaces converge (at least in a local sense, cf. Section 5.2 and [Izy17]) to SLE 3 type curves, the gist of our proof for Theorem 1.1 is a relatively standard SLE martingale argument. Indeed, due to the PDEs (1.2), the ratio Z α /Z Ising gives rise to a martingale with respect to exploring one of the chordal Ising interfaces (see Equation (5.2)). By investigating the terminal value of this martingale, an induction argument proves (1.1). The subtleties in the proof are related to the analysis of the martingale as well as of the SLE 3 type curve as it approaches one of the marked boundary points.
• PDEs: The probability amplitude Z_α in (1.1) satisfies a system of 2N partial differential equations, which in the upper half-plane H = {z ∈ C : Im(z) > 0} are given by (1.2).
Let us also remark that our results imply that the scaling limit of the Ising interfaces depicted in Figure 1.1 is the "global" multiple SLE 3 process whose law is given by the sum of the extremal multiple SLE 3 probability measures associated to the various possible connectivity patterns α of the interfaces (cf. [Izy17,BPW21]). To avoid introducing more definitions in the present article, for this we only refer the reader to the literature for details, discussed, e.g., in recent works by A. Karrila [Kar19,Kar20].
Analogues of Theorem 1.1 should also hold for other critical planar statistical mechanics models (with other κ > 0). In the appendices, we discuss the following known examples, whose boundary conditions are symmetric under cyclic permutations of the marked boundary points (i.e., rotationally symmetric): • Gaussian free field, whose level lines are SLE κ type curves with κ = 4 (see [SS13]); • chordal loop-erased random walks, which converge in the scaling limit to SLE κ type curves with κ = 2 (see [Sch00, LSW04, Zha08, Kar19] for various setups).
Both of these examples are exactly solvable: explicit formulas for connection probabilities for loop-erased random walks and level lines of the Gaussian free field (as well as for crossing probabilities in the double-dimer model) were found by R. Kenyon and D. Wilson in [KW11] and further related to SLEs in [PW19,KKP20]. Other lattice models seem "less exactly solvable": formulas for discrete crossing probabilities have not been found, and in the scaling limit only certain very special cases are known. In critical percolation, the case of N = 2 is given by Cardy's formula [Car92,Smi01] and the case of N = 3 was solved by J. Dubédat [Dub06,Section 4.4]. For general N , crossing probabilities in critical percolation have not been exactly solved even in the scaling limit, but nevertheless, they do admit a characterization by multiple SLE κ partition functions with κ = 6 (cf. [FSKZ17,LPW22]). Recently in [LPW21], explicit scaling limits of crossing formulas for Peano curves tracking frontiers of uniform spanning trees were found. These curves are described by SLE κ type curves with κ = 8 (dual to loop-erased random walks).
In the random-cluster representation of the Ising model (i.e., the FK-Ising model), each connection probability describes a natural percolation event, but the alternating boundary conditions in this model are not rotationally symmetric: flipping each wired boundary arc to a free boundary arc and vice versa changes the connection probabilities in the model. This is due to the fact that such a flip is not a global symmetry of the FK-Ising model, but rather a duality. This in particular implies that the total partition function Z_FK expanded as a linear combination of the pure partition functions Z_α with κ = 16/3 does not have the form of Equation (3.11), but rather Z_FK = Σ_α c_α Z_α, where c_α > 0 are non-trivial coefficients. These coefficients can be solved explicitly: they are given by entries of the so-called meander matrix [FPW22] (see also [FSKZ17]). For example, with four marked points the resulting probabilities can be written explicitly in terms of their cross-ratio.
For the FK-Ising model, K. Izyurov showed in [Izy15] that at criticality, probabilities of certain unions of connection events have conformally invariant scaling limits, expressed by quadratic irrational functions. He later clarified and improved these results in [Izy20], while still being unable to find explicit expressions for general connection probabilities. (In fact, predictions did appear in the physics literature [FSKZ17].) The general case is completely solved in the recent work [FPW22], where also an analogue of Theorem 1.1 is proven for the critical FK-Ising model. Also more general boundary conditions are treated in [FPW22].
Outline. The article is organized as follows. Section 2 is devoted to preliminaries: we briefly discuss SLEs, their basic properties, and define the multiple SLE (pure) partition functions. In the next Section 3, we focus on the case of κ = 3 and prove crucial results concerning the multiple SLE 3 partition function Z Ising . Section 4 consists of the analysis of the Loewner chain associated to this partition function, leading to Theorem 4.1: the Loewner chain is indeed generated by a transient curve. This is one of the main difficulties in the proof, and relies on fine properties of the partition function Z Ising from Section 3. In Section 5, we first briefly discuss the Ising model and some existing results on the convergence of Ising interfaces, and then prove the main result of this article, Theorem 1.1. For the proof, we need the following inputs. First, we use the convergence of multiple Ising interfaces in a local sense from [Izy17]. Second, we need the continuity of the scaling limit curves up to and including the swallowing time of the marked points, which follows by standard Russo-Seymour-Welsh estimates (cf. [CDCH16]) and Aizenman-Burchard & Kemppainen-Smirnov theory [AB99,KS17]. Third, a crucial technical ingredient to the proof is the continuity of the Loewner chain associated to the multiple SLE 3 partition function Z Ising up to and including the swallowing time of the marked points, that we establish in Theorem 4.1. To finish the proof of Theorem 1.1, we combine all of these inputs in Section 5.3 with detailed analysis of the martingale given by the ratio Z α /Z Ising , relying on properties of these partition functions from Section 3.
Lastly, Appendices A and B briefly summarize results similar to Theorem 1.1 for Gaussian free field and loop-erased random walks, respectively. The results from Appendix A are needed to prove Theorem 1.1, while Appendix B only serves as another example case of a rotationally symmetric model where analogous results hold (for proofs, see [Kar20,KKP20]).
Partition Functions of Multiple SLEs
In this section, we briefly discuss Schramm-Loewner evolutions (SLE) and their partition functions. For more background, the reader may consult [Sch00,Law05,RS05,Law09], for instance.
Recall that by a polygon (Ω; x_1, . . . , x_{2N}) we refer to a simply connected domain Ω ⊊ C such that ∂Ω is locally connected and x_1, . . . , x_{2N} ∈ ∂Ω are distinct boundary points in counterclockwise order along ∂Ω. When N = 1, we also call (Ω; x_1, x_2) a Dobrushin domain. We say that U ⊂ Ω is a sub-polygon of Ω if U is simply connected and U and Ω agree in neighborhoods of x_1, . . . , x_{2N}, and in the case of N = 1, we also call (U; x_1, x_2) a Dobrushin subdomain. Finally, we say that a polygon (Ω; x_1, . . . , x_{2N}) is nice if its boundary ∂Ω is C^{1+ε}-regular for some ε > 0 in neighborhoods of x_1, . . . , x_{2N}. An example of a nice polygon is the upper half-plane H = {z ∈ C : Im(z) > 0} with any boundary points x_1 < · · · < x_{2N}.
Planar curves are continuous mappings from [0, 1] to C modulo reparameterization. For a simply connected domain Ω ⊊ C, we will consider curves in Ω. For definiteness, we map Ω onto the unit disc U = {z ∈ C : |z| < 1}: for this we shall fix any conformal map Φ from Ω onto U. Then, we endow the curves with the metric dist(γ_1, γ_2) := inf sup_{t∈[0,1]} |Φ(γ_1(ψ_1(t))) − Φ(γ_2(ψ_2(t)))|, (2.1) where the infimum is taken over all increasing homeomorphisms ψ_1, ψ_2 : [0, 1] → [0, 1]. The space of continuous curves on C modulo reparameterizations then becomes a complete separable metric space. While the metric (2.1) depends on the choice of the conformal map Φ, the induced topology does not depend on this choice. This topology is important in Section 5.2 for convergence of discrete interfaces.
Schramm-Loewner Evolutions
For κ ≥ 0, the (chordal) Schramm-Loewner evolution, SLE_κ, can be thought of as a family of probability measures P(Ω; x, y) on curves, indexed by Dobrushin domains (Ω; x, y). Each measure P(Ω; x, y) is supported on continuous unparameterized curves in Ω from x to y. It will be convenient to choose a parameterized representative γ : [0, ∞) → C for each such curve, requiring that γ(0) = x and γ(t) → y as t → ∞. By re-scaling time, we regard γ as a representative of an element in the above curve space. Explicitly, the SLE_κ curves can be generated using random Loewner evolutions. Consider a family of maps (g_t, t ≥ 0) obtained by solving the Loewner differential equation ∂_t g_t(z) = 2/(g_t(z) − W_t), with g_0(z) = z for each z ∈ H, where (W_t, t ≥ 0) is a real-valued continuous driving function. By general ODE theory, for each z ∈ H this initial value problem has a unique solution (g_t(z), 0 ≤ t < τ_z) with maximal lifetime τ_z, the swallowing time of z. Set K_t := {z ∈ H : τ_z ≤ t}. Then, g_t : H \ K_t → H is the unique conformal map (biholomorphic function) normalized in such a way that |g_t(z) − z| → 0 as z → ∞. We call the growing sets (K_t, t ≥ 0) associated with these maps a Loewner chain. Note that g_t is also well-defined on R \ K_t, and thus, the swallowing time τ_z can be defined for each z ∈ H ∪ R. Now, the chordal SLE_κ in (H; 0, ∞) is first defined as the random Loewner chain (K_t, t ≥ 0) driven by W_t = √κ B_t, where (B_t, t ≥ 0) is the standard Brownian motion. This Loewner chain [RS05] is almost surely generated by a continuous transient curve η : [0, ∞) → H̄ in the sense that for each t, the domain H \ K_t is the unbounded component of H \ η[0, t], and we have η(0) = 0 and |η(t)| → ∞ as t → ∞. We regard η as a random curve in (H; 0, ∞) and denote its probability measure by P(H; 0, ∞). It is known [RS05] that when κ ∈ (0, 4] (for instance, κ = 3 for the Ising interfaces), almost surely, the SLE_κ curve η is simple and K_t = η[0, t] for all t. Thus, we may also refer to (η(t), t ≥ 0) itself as a Loewner chain. (When κ > 4, SLE_κ curves are not simple and K_t also includes regions swallowed by the curve.) For any Dobrushin domain (Ω; x, y), we extend the definition of the SLE_κ via conformal invariance: given any conformal map ϕ : H → Ω such that ϕ(0) = x and ϕ(∞) = y, the law P(Ω; x, y) of the SLE_κ curve η̃ in Ω from x to y is the pushforward by ϕ of the law P(H; 0, ∞) of η = ϕ^{−1}(η̃). Because the law P(H; 0, ∞) is scale-invariant (by Brownian scaling), the law P(Ω; x, y) is independent of the choice of ϕ.
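As an illustration of this construction (not taken from the paper), the following minimal sketch samples an approximate chordal SLE_κ trace in (H; 0, ∞) by discretizing the Brownian driving function and composing the corresponding inverse slit maps; the step size, time horizon and discretization scheme are arbitrary choices.

```python
import numpy as np

def sample_sle_trace(kappa=3.0, T=1.0, n_steps=2000, seed=0):
    """Approximate the chordal SLE_kappa trace via a piecewise-constant driving
    function: gamma(t_k) ~ F_1 o F_2 o ... o F_k (W_{t_k}), where F_j is the
    inverse of the Loewner map for constant driving over one time step."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    dW = np.sqrt(kappa * dt) * rng.standard_normal(n_steps)
    W = np.concatenate([[0.0], np.cumsum(dW)])   # W_t = sqrt(kappa) * B_t on the grid

    def slit_map_inverse(z, w, dt):
        # inverse slit map; choose the square-root branch landing in the upper half-plane
        s = np.sqrt((z - w) ** 2 - 4.0 * dt + 0j)
        if s.imag < 0:
            s = -s
        return w + s

    trace = [0.0 + 0.0j]
    for k_step in range(1, n_steps + 1):
        z = complex(W[k_step])                   # image of the tip under g_{t_k}
        for j in range(k_step, 0, -1):           # apply F_k first, F_1 last
            z = slit_map_inverse(z, W[j], dt)
        trace.append(z)
    return np.array(trace)

if __name__ == "__main__":
    gamma = sample_sle_trace(kappa=3.0)
    print(gamma[:5])
```

The cost is O(n_steps^2); more refined schemes exist, but this suffices to visualize a single κ = 3 curve.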
The motivation for O. Schramm to introduce SLEs in his celebrated work [Sch00] was indeed their conformal invariance, which is crucial for the description of critical interfaces in many statistical mechanics models. The SLE_κ curves have another important feature, the domain Markov property: if τ is a stopping time for the growing SLE_κ curve η ∼ P(Ω; x, y), then, given an initial segment η[0, τ], the conditional law of the remaining piece η[τ, ∞) is P(Ω_τ; η(τ), y), where Ω_τ is the connected component of the remaining domain Ω \ η[0, τ] having the target point y on its boundary. The domain Markov property lies at the heart of many martingale arguments applicable to problems involving SLE curves, such as Theorem 1.1.
Partition Functions of Multiple SLEs
Next, we discuss the crossing probability amplitudes Z_α in Theorem 1.1. We frequently use the conformal weight parameter h = h(κ), mostly focusing on the case of κ = 3, with h = 1/2. The functions Z_α are examples of multiple SLE_κ partition functions. For H, these are defined as positive smooth functions Z of 2N real variables x_1 < · · · < x_2N satisfying the following two properties:
(PDE) Partial differential equations of second order: the system (2.2) of 2N equations holds.
(COV) Möbius covariance: for all Möbius maps ϕ : H → H such that ϕ(x_1) < · · · < ϕ(x_2N), the covariance relation (2.3) holds.
Multiple SLE_κ partition functions have been studied in many works, e.g., [BBK05, Dub06, Dub07, KL07, FK15b, KP16, PW19, Wu20]. They give rise to SLE_κ variants known as multiple SLEs (or N-SLE_κ processes), which are also termed "commuting SLEs" after J. Dubédat [Dub07]. For the purposes of the present article, we shall only need the following description of the marginal law of one curve.
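The displays (2.2)-(2.3) did not survive extraction. For orientation, a standard form of the parameter h, the PDE system, and the Möbius covariance from the multiple SLE literature is recalled below; this is our transcription, so minor normalization details may differ from the article's.

```latex
% Conformal weight (so that h = 1/2 for kappa = 3):
\[
h \;=\; \frac{6-\kappa}{2\kappa}.
\]
% (PDE), for each i = 1, ..., 2N:
\[
\Bigg[\frac{\kappa}{2}\,\partial_{x_i}^2
  \;+\; \sum_{j\neq i}\Big(\frac{2}{x_j-x_i}\,\partial_{x_j}
  \;-\; \frac{2h}{(x_j-x_i)^2}\Big)\Bigg]\,
\mathcal{Z}(x_1,\ldots,x_{2N}) \;=\; 0.
\]
% (COV), for Mobius maps \varphi of H preserving the order of the points:
\[
\mathcal{Z}(x_1,\ldots,x_{2N})
  \;=\; \prod_{i=1}^{2N}\varphi'(x_i)^{h}\;\times\;
  \mathcal{Z}\big(\varphi(x_1),\ldots,\varphi(x_{2N})\big).
\]
```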
If Z is such a partition function, then for each j ∈ {1, . . . , 2N}, a Loewner chain associated to Z with launching points (x_1, . . . , x_2N) and starting from x_j is defined as the Loewner chain growing from x_j with spectator points (x_1, . . . , x_{j−1}, x_{j+1}, . . . , x_2N) whose driving function W_t satisfies the SDEs (2.4); a standard form of these SDEs is recalled below. This process is well-defined up to the first time τ_{x_{j−1}} ∧ τ_{x_{j+1}} when either x_{j−1} or x_{j+1} is swallowed. Note that V^i_t is the time-evolution of the spectator point x_i, which coincides with g_t(x_i) for t smaller than the swallowing time of x_i. From the PDEs (2.2) and Itô calculus, we know that the process M_t(Z) below is a local martingale with respect to the growth of the SLE_κ curve in (H; x_j, ∞). Moreover, for any stopping time τ for which M_{t∧τ}(Z) is a martingale, the law of the Loewner chain associated to Z starting from x_j is the same as P(H; x_j, ∞) weighted by M_{t∧τ}(Z)/M_0(Z), by Girsanov's theorem, which introduces the drift in (2.4). We say that the measure has been tilted by the local martingale M(Z). See [BBK05, Dub07, KL07, Law09] for more details.
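A standard form of the SDEs (2.4) and of the associated local martingale, as they usually appear in the references cited above (our transcription, not quoted from the article), is:

```latex
% Driving SDE for the Loewner chain associated to Z, started from x_j:
\[
\mathrm{d}W_t \;=\; \sqrt{\kappa}\,\mathrm{d}B_t
  \;+\; \kappa\,\partial_j \log \mathcal{Z}\big(V_t^1,\ldots,V_t^{j-1},W_t,V_t^{j+1},\ldots,V_t^{2N}\big)\,\mathrm{d}t,
  \qquad W_0 = x_j,
\]
\[
\mathrm{d}V_t^i \;=\; \frac{2\,\mathrm{d}t}{V_t^i - W_t},
  \qquad V_0^i = x_i \quad (i \neq j).
\]
% The associated local martingale for the SLE_kappa in (H; x_j, infinity):
\[
M_t(\mathcal{Z}) \;=\; \prod_{i\neq j} g_t'(x_i)^{h}\;
  \mathcal{Z}\big(V_t^1,\ldots,V_t^{j-1},W_t,V_t^{j+1},\ldots,V_t^{2N}\big).
\]
```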
Amongst solutions to (2.2)-(2.3), the pure partition functions Z_α will be singled out by the following asymptotics property (which serves as a boundary condition for the PDE system (2.2)):
(ASY) Asymptotics: Denoting by ∅ the link pattern in LP_0, we have Z_∅ = 1, and for all N ≥ 1, α ∈ LP_N, and j ∈ {1, . . . , 2N − 1}, the asymptotics relation (2.5) holds, where α̂ = α/{j, j + 1} ∈ LP_{N−1} denotes the link pattern obtained from α by removing the link {j, j + 1} and relabeling the remaining indices by the first 2(N − 1) positive integers.
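A standard form of the asymptotics (2.5), again our transcription from the multiple SLE literature rather than a quotation of the article, reads:

```latex
% (ASY): for all alpha in LP_N, j in {1,...,2N-1}, and xi in (x_{j-1}, x_{j+2}),
\[
\lim_{x_j,\,x_{j+1}\to\xi}\;
  \frac{\mathcal{Z}_\alpha(x_1,\ldots,x_{2N})}{(x_{j+1}-x_j)^{-2h}}
  \;=\;
  \begin{cases}
    \mathcal{Z}_{\hat\alpha}(x_1,\ldots,x_{j-1},x_{j+2},\ldots,x_{2N}),
      & \{j,j+1\}\in\alpha,\\[2pt]
    0, & \{j,j+1\}\notin\alpha.
  \end{cases}
\]
```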
The term "pure" partition function is motivated by the multiple SLE κ pure geometries introduced in the physics literature [BBK05] by M. Bauer, D. Bernard & K. Kytölä (see also [KP16]), predicting (correctly) that Loewner chains associated to the partition functions Z α correspond to critical interfaces joining together according to the given topological connectivity α ∈ LP N (cf. Figure 1.1). In fact, the collection {Z α : α ∈ LP N } forms a basis for a space of multiple SLE κ partition functions of dimension LP N } of functions of boundary points, uniquely determined by the properties (PDE) (2.2), (COV) (2.3), (ASY) (2.5), and the following power-law bound: there exist constants C > 0 and p > 0 such that for all N ≥ 1 and α ∈ LP N , we have (2.6) The existence and uniqueness of the pure partition functions in this form was shown in [PW19, Theorem 1.1] for κ ≤ 4 and in [Wu20, Theorem 1.1] for κ ≤ 6, based on and supplementing various earlier results cited above. For the case of κ = 3, which is the central concern of the present article, the Coulomb gas approach of [BBK05,Dub06,Dub07,FK15b,KP16] has problems (because κ is rational), while the configurational probabilistic approach of [KL07, Law09, PW19, Wu20] gives an explicit construction in terms of total masses of multiple SLE κ measures. We will not need the explicit construction in this work and thus refer the reader to the literature for more details.
More generally, the multiple SLE_κ partition functions are defined for any nice polygon (Ω; x_1, . . . , x_2N) via their conformal images: if ϕ : Ω → H is any conformal map such that ϕ(x_1) < · · · < ϕ(x_2N), we set
Z(Ω; x_1, . . . , x_2N) := ∏_{i=1}^{2N} |ϕ'(x_i)|^h × Z(ϕ(x_1), . . . , ϕ(x_2N)).    (2.7)
By the Möbius covariance property (2.3), this definition is independent of the choice of ϕ. When N = 1, there exists only one multiple SLE_κ pure partition function, namely Z(Ω; x, y) = H_Ω(x, y)^h, where H_Ω(x, y) is the boundary Poisson kernel, that is, the unique function determined by the properties H_H(x, y) = |y − x|^{−2} and H_Ω(x, y) = |ϕ'(x)| |ϕ'(y)| H_{ϕ(Ω)}(ϕ(x), ϕ(y)) for any conformal map ϕ : Ω → ϕ(Ω). Let us also note that the boundary Poisson kernel has the following useful monotonicity property: for any Dobrushin subdomain (U; x, y) of (Ω; x, y), we have H_U(x, y) ≤ H_Ω(x, y).    (2.8)
Lastly, we remark that ratios of partition functions can be defined for general polygons (i.e., when considering ratios, the niceness assumption can be dropped). Namely, suppose that Z_1 and Z_2 are two partition functions, and denote their ratio on H by P := Z_1/Z_2. For a general polygon, we then set P(Ω; x_1, . . . , x_2N) := P(ϕ(x_1), . . . , ϕ(x_2N)), where ϕ : Ω → H is any conformal map such that ϕ(x_1) < · · · < ϕ(x_2N). The above definition of P is independent of the choice of ϕ thanks to (2.3). We view P(Ω; x_1, . . . , x_2N) as the ratio of Z_1(Ω; x_1, . . . , x_2N) and Z_2(Ω; x_1, . . . , x_2N), although these two latter functions may be not well-defined, writing P(Ω; x_1, . . . , x_2N) = Z_1(Ω; x_1, . . . , x_2N) / Z_2(Ω; x_1, . . . , x_2N).
Useful Properties and Bounds
To end this section, we collect properties of the multiple SLE_κ partition functions that will be needed for a priori estimates in Section 3. First, we set B_∅ := 1 and define, for all N ≥ 1 and x_1 < · · · < x_2N, the bound functions B_α and B^{(N)} in (2.9). More generally, for each nice polygon (Ω; x_1, . . . , x_2N), we set B_α(Ω; x_1, . . . , x_2N) and B^{(N)}(Ω; x_1, . . . , x_2N) via their conformal images as in (2.7), where ϕ is again any conformal map from Ω onto H such that ϕ(x_1) < · · · < ϕ(x_2N). Note that, in the above definition, B_α(Ω; x_1, . . . , x_2N) and B^{(N)}(Ω; x_1, . . . , x_2N) do not depend on the choice of the conformal map ϕ because B_α and B^{(N)} in (2.9) satisfy the Möbius covariance (2.3) with h = 1/2. In applications, the following strong bound for the pure partition functions is very important: for κ ∈ (0, 6] and for any nice polygon (Ω; x_1, . . . , x_2N), the bound (2.10) holds. This bound was proved in [PW19, Theorem 1.1] for κ ≤ 4 and [Wu20, Theorem 1.6] for κ ≤ 6, and it was used in [PW19] to prove that the pure partition functions with κ = 4 give formulas for connection probabilities of the level lines of the Gaussian free field with alternating boundary data (see also Appendix A of the present article). Properties of the bound functions B_α were crucial in that proof, and they will also play an essential role in the present article, focusing on the case where κ = 3. Note in particular that the bound (2.10) implies the power-law bound (2.6).
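For orientation, a commonly used choice of bound function and the shape of the strong bound, as we recall them from [PW19, Wu20], are given below; the article's exact definition of B^{(N)} (which involves the extra parameter p of Lemma 3.4) is not reproduced here and may differ.

```latex
% Bound function attached to a link pattern alpha (our recollection):
\[
\mathcal{B}_\alpha(x_1,\ldots,x_{2N})
  \;:=\; \prod_{\{a,b\}\in\alpha} |x_b - x_a|^{-2h},
\]
% and the strong bound (2.10) then reads, for kappa in (0,6]:
\[
0 \;<\; \mathcal{Z}_\alpha(\Omega;x_1,\ldots,x_{2N})
  \;\le\; \mathcal{B}_\alpha(\Omega;x_1,\ldots,x_{2N}).
\]
```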
Analyzing the Ising Partition Function
In this section, we consider the symmetric partition function (2.12) for κ = 3, denoted by Z_Ising = Z^{(N)}_Ising, which appears in the denominator of the formula (1.1) for the Ising crossing probabilities. We prove the key results needed in the proof of the main Theorem 1.1, namely Propositions 3.1 and 3.2, concerning the cascade asymptotics of Z_Ising and the two-sided bounds (3.1). These bounds are not sharp in general, but they are nevertheless sufficient for our purposes.
Another important result in this section is Proposition 3.11, concerning the boundary behavior of the ratios Z α /Z Ising of partition functions when the variables move under a Loewner evolution. Next, in Section 3.1 we collect general identities concerning the bound functions B α and B (N ) . Then, we focus on the case of κ = 3. We prove Propositions 3.1 and 3.2 respectively in Sections 3.2 and 3.3. Finally, we state and prove Proposition 3.11 in Section 3.4. Its proof relies on Propositions 3.1 and 3.2.
Properties of the Bound Functions
To begin, we consider the bound functions B_α and B^{(N)} defined in Section 2.3. In particular, they satisfy properties similar to those appearing in Propositions 3.1 and 3.2; see Lemmas 3.3 and 3.4. These results can in fact be applied to analyze multiple SLE_κ partition functions for any κ ∈ (0, 6]. For instance, in [PW19] related results were used to prove that connection probabilities of level lines of the Gaussian free field are given by multiple SLE_κ pure partition functions with κ = 4 (see also Appendix A).
Lemma 3.4. Fix p ≥ 0 and N ≥ 1. Then the two-sided bound (3.3) holds.
Proof. We prove (3.3) by induction on N ≥ 1. The initial case N = 1 is trivial. Let then N ≥ 2 and assume that Equation (3.3) holds up to N − 1. A straightforward calculation shows that the product expression appearing in the resulting formula is the same as the probability P_{(j,j+1)} = P_{(j+1,j)} in (A.2) in Appendix A. Using this observation, we first prove the upper bound in (3.3) [by the induction hypothesis]. To prove the lower bound in (3.3), we first estimate the left-hand side from below [by the induction hypothesis]. We now note that Σ_j P_{(j,j+1)} ≥ 1, which implies max_j P_{(j,j+1)} ≥ 1/(2N − 1) and shows the lower bound.
Combining this with (3.3), we obtain the following bound for the symmetric partition function.
Corollary 3.6. Fix κ ∈ (0, 6] and let Z^{(N)} be the symmetric partition function (2.12). Then, the corresponding upper bound holds.
Proof. This follows by combining (2.10) with the upper bound in (3.3) for p = 2h.
Corollary 3.6 with κ = 3 immediately gives the upper bound in Proposition 3.2. However, the ratio Z_α/B_α can be arbitrarily small, so the lower bound in Proposition 3.2 cannot be derived easily from the lower bound in Lemma 3.4. To establish the lower bound, we first prove a useful identity for B^{(N)} in Lemma 3.8.
Cascade Asymptotics -Proof of Proposition 3.1
The symmetric partition function Z_Ising has an explicit Pfaffian formula, already well-known in the physics literature and appearing, e.g., in [KP16, Izy17, PW19] in the context of SLEs. To state it, we use the following notation: we let Π_N denote the set of all pair partitions ϖ = {{a_1, b_1}, . . . , {a_N, b_N}} of {1, 2, . . . , 2N}, with the convention that a_1 < a_2 < · · · < a_N and a_j < b_j for all j ∈ {1, . . . , N}. We also denote by sgn(ϖ) the sign of the pair partition ϖ, defined as the sign of the product of (a − c)(a − d)(b − c)(b − d) over pairs of distinct elements {a, b}, {c, d} ∈ ϖ. Note that the set of link patterns LP_N is the subset of Π_N consisting of planar pair partitions. With this notation, the function (2.12) with κ = 3 reads (see, e.g., [PW19, Lemma 4.13] and [KP16, Proposition 4.6] for a proof)
Z^{(N)}_Ising(x_1, . . . , x_2N) = Pf((x_j − x_i)^{−1})_{i,j=1}^{2N} = Σ_{ϖ ∈ Π_N} sgn(ϖ) ∏_{{a,b} ∈ ϖ} (x_b − x_a)^{−1}    (3.8)
for Ω = H, and it is again defined for general nice polygons via the conformal covariance (3.9), with any conformal map ϕ : Ω → H such that ϕ(x_1) < · · · < ϕ(x_2N). The formula (3.8) shows that, up to a sign, the function Z_Ising satisfies the Möbius covariance also for maps that move the point ∞.
More precisely, for a conformal map ϕ of H that may move the point ∞, the covariance produces Z_Ising(ϕ(x_{ι(1)}), . . . , ϕ(x_{ι(2N)})) up to the usual covariance factors and a sign determined by |ι|, where {x_{ι(1)}, . . . , x_{ι(2N)}} = {x_1, . . . , x_2N} satisfy ϕ(x_{ι(1)}) < · · · < ϕ(x_{ι(2N)}) and |ι| is the number of indices j in {2, 3, . . . , 2N} such that ϕ(x_{ι(j)}) < ϕ(x_{ι(1)}). Note that ι is a cyclic permutation of {1, . . . , 2N} and the Pfaffian function Pf(z_1, . . . , z_2N) is odd in the sense that for any i < j, we have
Pf(z_1, . . . , z_i, . . . , z_j, . . . , z_2N) = −Pf(z_1, . . . , z_j, . . . , z_i, . . . , z_2N).    (3.10)
From the Pfaffian formula, it is not obvious that Z_Ising > 0, but this is indeed the case: the positivity follows, e.g., from its definition (2.12) and the fact that each Z_α is positive by [PW19, Theorem 1.1]:
Z_Ising(x_1, . . . , x_2N) = Σ_{α ∈ LP_N} Z_α(x_1, . . . , x_2N) > 0.    (3.11)
Alternatively, as pointed out by the referee, by using Lemma 3.9 and studying the asymptotics of (3.8), one may check that Z_Ising > 0 from the formula (3.12) below. Our next aim is to prove the cascade asymptotics property, Proposition 3.1, for the function Z_Ising. In general, the limiting behavior of functions of several variables is rather delicate, and indeed, even with the explicit formula (3.8) for Z_Ising, the analysis of its behavior as the variables tend together is nontrivial. The problem with using formula (3.8) is that it includes a sum of positive and negative terms (which could in principle lead to cancellations and a signed expression). However, thanks to the following Hafnian identity, a sum over non-negative terms, we are able to carry out the analysis required for Proposition 3.1. This identity is well-known in the literature [ID89, DFMS97]: it is a manifestation of "bosonization identities" for the Ising model. For completeness, we give a proof of it here.
Lemma 3.9. The following identity holds for all z_1, . . . , z_2N ∈ C with z_i ≠ z_j for all i ≠ j:
(Pf(z_1, . . . , z_2N))² = Hf(z_1, . . . , z_2N) := Σ_{ϖ ∈ Π_N} ∏_{{a,b} ∈ ϖ} (z_b − z_a)^{−2}.    (3.12)
Proof. Expanding the square of the Pfaffian (3.8), we have
(Pf(z_1, . . . , z_2N))² = Σ_{ϖ, ϱ ∈ Π_N} sgn(ϖ) sgn(ϱ) ∏_{{a,b} ∈ ϖ} (z_b − z_a)^{−1} ∏_{{c,d} ∈ ϱ} (z_d − z_c)^{−1}.    (3.13)
We see from the asserted formula (3.12) that the diagonal terms ϖ = ϱ yield the desired Hafnian expression, and therefore, we only need to prove that all of the off-diagonal terms in (3.13) cancel out. To establish this, we use induction on N ≥ 1. The initial case N = 1 is clear. Let us then assume that the off-diagonal terms cancel for all smaller N. Fix the variables z_1, . . . , z_{2N−1} ∈ C at arbitrary distinct positions, denote by ϖ(2N) (resp. ϱ(2N)) the pair of 2N in ϖ (resp. ϱ), that is, {ϖ(2N), 2N} ∈ ϖ (resp. {ϱ(2N), 2N} ∈ ϱ), and consider the following meromorphic function of z ∈ C:
F(z) := (Pf(z_1, . . . , z_{2N−1}, z))² − Hf(z_1, . . . , z_{2N−1}, z) = Σ_{ϖ ≠ ϱ ∈ Π_N} sgn(ϖ) sgn(ϱ) ∏_{{a,b} ∈ ϖ} (z_b − z_a)^{−1} ∏_{{c,d} ∈ ϱ} (z_d − z_c)^{−1},    (3.14)
where, in the products, the last point z_2N is read as z. We aim to prove that the function F is identically zero. F(z) vanishes as z → ∞ and it can only have poles of degree at most two at z = z_j for some j ∈ {1, 2, . . . , 2N − 1}. We will show that these points are in fact not poles. By symmetry, it suffices to consider the Laurent series expansion of F in ε = z − z_{2N−1}, and by translation invariance, F depends on z_{2N−1} and z only via their difference. In particular, F is a meromorphic function of ε ∈ C and the Laurent series reads
F = A_{−2} ε^{−2} + A_{−1} ε^{−1} + O(1).    (3.15)
To exclude the first order poles, we show that the right-hand side of (3.15) is an even function of ε. Notice that the Pfaffian function Pf(z_1, . . . , z_2N) is odd in the sense of (3.10), while the function Hf(z_1, . . . , z_2N) is even in the sense that for any i < j, we have Hf(z_1, . . . , z_i, . . . , z_j, . . . , z_2N) = Hf(z_1, . . . , z_j, . . . , z_i, . . . , z_2N).
In particular, choosing i = 2N − 1 and j = 2N, we see that exchanging the arguments z_{2N−1} and z leaves both (Pf)² and Hf, and hence F, unchanged. Now, the Laurent series expansion of the exchanged expression, viewed as a function of δ := z_{2N−1} − z, has the same coefficients as (3.15), and because δ = −ε, this shows that the Laurent series expansion (3.15) in ε = z − z_{2N−1} is invariant under the flip ε → −ε. Therefore, A_{−1} ≡ 0 in (3.15). The term of order ε^{−2} in (3.15) is given by those terms in (3.14) which satisfy ϖ(2N) = ϱ(2N) = 2N − 1, that is, {2N − 1, 2N} ∈ ϖ ∩ ϱ. Removing this pair from both ϖ and ϱ results in two different pair partitions of 2N − 2 points, which have the same signs as ϖ and ϱ, respectively. Therefore, the coefficient A_{−2} equals the corresponding off-diagonal sum for the points z_1, . . . , z_{2N−2}, and this expression is zero by the induction hypothesis. Thus, the function F : C → C has no poles.
In conclusion, we have shown that F is an entire function with lim z→∞ F (z) = 0. Therefore, it is bounded and by Liouville's theorem, F ≡ 0. This shows that (Pf) 2 ≡ Hf and implies (3.12).
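The identity of Lemma 3.9 is easy to test numerically. The following self-contained Python sketch (ours, for illustration only) evaluates both sides as sums over pair partitions and checks that they agree up to rounding error for small N.

```python
import itertools
import random

def pair_partitions(points):
    """All pair partitions of an even-sized list of indices, as lists of (a, b) with a < b."""
    if not points:
        yield []
        return
    a = points[0]
    for i in range(1, len(points)):
        b = points[i]
        rest = points[1:i] + points[i + 1:]
        for tail in pair_partitions(rest):
            yield [(a, b)] + tail

def sign(pairs):
    """Sign of a pair partition: (-1) to the number of crossing pairs of links."""
    s = 1
    for (a, b), (c, d) in itertools.combinations(pairs, 2):
        if (a - c) * (a - d) * (b - c) * (b - d) < 0:
            s = -s
    return s

def prod_links(z, pairs, power):
    out = 1.0
    for a, b in pairs:
        out *= 1.0 / (z[b] - z[a]) ** power
    return out

def pf(z):
    """Pfaffian of the matrix 1/(z_j - z_i), written as a sum over pair partitions."""
    idx = list(range(len(z)))
    return sum(sign(p) * prod_links(z, p, power=1) for p in pair_partitions(idx))

def hf(z):
    """Hafnian-type sum over pair partitions of prod 1/(z_b - z_a)^2."""
    idx = list(range(len(z)))
    return sum(prod_links(z, p, power=2) for p in pair_partitions(idx))

if __name__ == "__main__":
    random.seed(1)
    for two_n in (2, 4, 6, 8):
        z = sorted(random.uniform(0.0, 10.0) for _ in range(two_n))
        rel_err = abs(pf(z) ** 2 - hf(z)) / abs(hf(z))
        print(two_n, rel_err)  # each relative error should be ~1e-15
```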
Upper and Lower Bounds -Proof of Proposition 3.2
The purpose of this section is to finish the proof of Proposition 3.2, that is, to show the bounds (3.1), restated as (3.16).
Proof. The upper bound in (3.16) follows directly from Corollary 3.6 with κ = 3. To prove the lower bound in (3.16), a slightly more careful calculation is needed, in which the Hafnian identity of Lemma 3.9 plays a crucial role. The lower bound follows from Lemma 3.10 below.
Lemma 3.10. For any N ≥ 1, the lower bound (3.17) relating B^{(N)}(x_1, . . . , x_2N) and Z^{(N)}_Ising(x_1, . . . , x_2N) holds.
Proof. We prove (3.17) by induction on N ≥ 1. In the initial case N = 1, the two sides of (3.17) coincide. Let then N ≥ 2 and assume that Equation (3.17) holds up to N − 1. By expanding in Lemma 3.9 the sum over pair partitions ϖ ∈ Π_N into terms according to the pair j of the last point 2N, we obtain an expression for Z^{(N)}_Ising in terms of the points y^j_i, where y^j_i := x_i for 1 ≤ i ≤ j − 1 and y^j_i := x_{i+1} for j ≤ i ≤ 2N − 2. Recalling Lemma 3.8 and separating the sum according to the parity of j, and noting that all terms are non-negative, dropping the sum over even j yields the rough lower bound (3.18). Now, using Remark 3.7, we see that the last product is in fact larger than one. Finally, we note that the remaining factor can be written in terms of the probability P_{(j,2N)} in (A.2), summed over odd j with 1 ≤ j < 2N. Using the Cauchy-Schwarz inequality, we eventually get a lower bound for the square of Z^{(N)}_Ising. Because by (3.11) and (2.9) we have Z_Ising > 0 and B^{(N)} > 0, this gives the lower bound (3.17).
Boundary Behavior
The final result of this section concerns the boundary behavior of the ratios Z α /Z Ising of partition functions when the variables move under a Loewner evolution. It is crucial for proving Theorem 1.1 in Section 5.
Loewner Chains Associated to Partition Functions When κ = 3
Fix j ∈ {1, 2, . . . , 2N}. Recall from Sections 2.1 and 2.2 that, for launching points x_1 < · · · < x_2N on R = ∂H, the Loewner chain associated to an SLE_κ partition function Z starting from x_j is the process with driving function W given by the SDEs (2.4). In this section, we specialize to the case where κ = 3 and consider the Loewner chain associated to Z_Ising starting from x_j. The Loewner chain is well-defined up to the first time T at which it swallows a spectator point, cf. (4.1). We will see that the Loewner chain is generated by a continuous curve η up to the swallowing time T, and it terminates on the real line at time T, that is, η(T) ∈ R. However, the continuity of the curve as it approaches the swallowing time is difficult to prove in general. The main purpose of this section is to analyze the behavior of the Loewner chain when approaching the swallowing time and to prove the continuity up to and including T. The main result of this section is Theorem 4.1 stated below, which we prove in Section 4.2. Proposition 3.2 plays an important role in the proof.
Theorem 4.1. Fix κ = 3, N ≥ 1, and j ∈ {1, 2, . . . , 2N}. The Loewner chain associated to Z_Ising with launching points (x_1, . . . , x_2N) starting from x_j is almost surely generated by a continuous simple curve (η(t), 0 ≤ t ≤ T) up to and including T. This curve almost surely terminates at one of the points {x_k : k ∈ I_j} and touches R only at its two endpoints. Furthermore, for any k ∈ I_j, the probability of terminating at x_k is given by (4.2). Moreover, conditionally on the event {η(T) = x_k}, the law of η is that of the SLE_3 curve γ in (H; x_j, x_k) weighted by the Radon-Nikodym derivative (4.3). The outline of this section is as follows. In Section 4.1, we decompose the symmetric partition function Z_Ising by means of a cascade relation (Lemma 4.2), and in Section 4.2 we use this decomposition to prove Theorem 4.1.
Cascade Relation for Partition Functions
Fix a nice polygon (Ω; x_1, . . . , x_2N) and α ∈ LP_N. Given any link {a, b} ∈ α, let γ be the SLE_3 curve in Ω from x_a to x_b, and assume that a < b for notational simplicity. Then, the link {a, b} divides the link pattern α into two sub-link patterns, connecting respectively the points {a + 1, . . . , b − 1} and {b + 1, . . . , a − 1} (with indices understood cyclically). After relabeling of the indices, we denote these two link patterns by α^R and α^L. Then, we have the cascade relation (4.4) for the pure partition functions [PW19, Proposition 3.5]. As a consequence, we obtain a similar cascade relation for the symmetric partition function Z_Ising.
Proof of Lemma 4.2. Without loss of generality, we may assume that a < b. The identity (4.5) then follows by summing over all possible α L and α R in the cascade relation (4.4):
Continuity of the Loewner Chain -Proof of Theorem 4.1
We prove Theorem 4.1 by induction on N ≥ 1. There is nothing to prove in the initial case N = 1, so we assume that N ≥ 2 and that Theorem 4.1 holds for all j ∈ {1, 2, . . . , 2(N − 1)}. For definiteness, we also assume that j = 1 and prove Theorem 4.1 for the Loewner chain associated to Z_Ising with launching points (x_1, . . . , x_2N) starting from x_1. We denote by (g_t, t ≥ 0) the corresponding conformal maps, and we denote the Loewner chain by η (with suggestive notation anticipating that it will be generated by a curve). We break the proof into several lemmas, each addressing a part of the claim.
Proof. Let η̃ be the Loewner chain associated to Z^{(N−1)}_Ising with launching points (x_1, . . . , x_{2N−2}) starting from x_1. From the discussion after (2.4), we see that the law of η is the same as the law of η̃ tilted by the local martingale R_t in (4.6), for small enough t.
• By the bounds (3.2) and (3.7), the partition-function factor of R_t is bounded [by (3.7)].
• By the monotonicity property (2.8), the remaining factor of R_t is bounded.
In conclusion, we obtain an upper bound for R. From this, we see that for any ε > 0, the local martingale R_t is bounded (and hence a true martingale) up to the stopping time S_ε of (4.7). Hence, for any ε > 0, the law of η is absolutely continuous with respect to the law of η̃ up to time S_ε. Therefore, on the event E_{2N−1} that η accumulates in the interval (x_1, x_{2N−1}), the law of η is absolutely continuous with respect to the law of η̃. Thus, the claim follows from the induction hypothesis on η̃.
Proof. Using the same notation as in the proof of Lemma 4.3, we recall that η has the law of η̃ tilted by the local martingale (4.6), which we write in the form R_{t∧T}. By Lemma 4.3, on the event {η̃(T) = x_{2ℓ}}, the curve η is continuous up to and including T. Hence, Proposition 3.1 and the conformal covariance (2.7) show that, as t → T, the process R_{t∧T} converges almost surely.
Furthermore, the convergence also holds in L¹(P̃[· | η̃(T) = x_{2ℓ}]), where P̃[· | η̃(T) = x_{2ℓ}] denotes the law of η̃ conditioned on the event {η̃(T) = x_{2ℓ}}, because R_{t∧T} is a positive and uniformly integrable martingale under the measure P̃[· | η̃(T) = x_{2ℓ}] for the following reason. For m ≥ 1, denote by P̂*_m the law of η̃ tilted by the martingale R_{t∧T∧S_{1/m}}, where S_{1/m} is the stopping time defined in (4.7) (so that R_{t∧T∧S_{1/m}} is bounded for all t). Then, P̂*_m is the same as the law of η up to time S_{1/m}. On the one hand, the measures P̂*_m are manifestly consistent in m, so by Kolmogorov's extension theorem, there exists a probability measure P̂* such that under P̂* the curve has the same law as η up to time S_{1/m}, for any m ≥ 1. On the other hand, by Lemma 4.3 the Loewner chain η is almost surely continuous up to and including T and only touches R at x_1 and x_{2ℓ}. Thus, on the event {η(T) = x_{2ℓ}}, there is almost surely a positive distance between η[0, T] and the points {x_{2ℓ+1}, x_{2ℓ+2}, . . . , x_2N}. In particular, almost surely on the event {η(T) = x_{2ℓ}}, the stopping times S_{1/m} eventually exceed T. It follows that (see, e.g., [Dur10, Theorem 5.3.3]) the law of η conditioned on {η(T) = x_{2ℓ}} is the same as P̂* conditioned on the same event, which is the same as P̃[· | η̃(T) = x_{2ℓ}] tilted by R_{t∧T}. This gives the uniform integrability of the martingale R_{t∧T}. With the L¹-convergence, we conclude the identity (4.9). Now, by the induction hypothesis on η̃, on the event {η̃(T) = x_{2ℓ}}, the law P̃ of η̃ is that of the SLE_3 curve γ in H from x_1 to x_{2ℓ} weighted by the Radon-Nikodym derivative (4.10). Combining (4.10) with (4.9), we obtain the corresponding formula for η. Now, we use Lemma 4.2 with a = 2ℓ and b = 1 to evaluate the numerator. On the other hand, by the induction hypothesis on η̃, we know the identity (4.12).
Proof. We still use the same notation as in the proofs of the preceding lemmas.
On the other hand, on the event {η̃(T) = x_{2ℓ}}, the law of η̃ is the same as the law of γ weighted by the Radon-Nikodym derivative (4.10). Therefore, conditionally on the event {η(T) = x_{2ℓ}}, the law of η is the same as the law of γ weighted by the corresponding Radon-Nikodym derivative. This gives the asserted formula (4.12) due to (4.11) and (4.8).
Lemma 4.6. Assume that Theorem 4.1 holds for N − 1. Define Ẽ_{2N−1} to be the event that the Loewner chain η associated to Z_Ising with launching points (x_1, . . . , x_2N) starting from x_3 accumulates in the semi-open interval [x_1, x_3). Then, on the event Ẽ_{2N−1}, the Loewner chain η is almost surely generated by a continuous simple curve up to and including T, which almost surely terminates at x_2 and touches R only at its two endpoints. Moreover, the law of η is that of the SLE_3 curve γ in H from x_3 to x_2 weighted by the Radon-Nikodym derivative (4.13).
Proof. Let η̃ be the Loewner chain associated to Z^{(N−1)}_Ising with launching points (x_1, . . . , x_{2N−2}) starting from x_3, and denote its law by P̃. Then, similarly as in the proof of Lemma 4.3, we see that the law of η is the same as the law of η̃ tilted by the local martingale R̃_t, analogous to (4.6), for small enough t.
Thus, a similar analysis as in the proof of Lemma 4.3 shows that, on the event Ẽ_{2N−1}, the law of η is absolutely continuous with respect to the law of η̃. In particular, η is almost surely (on the event Ẽ_{2N−1}) generated by a continuous simple curve up to and including T, terminating almost surely at x_2.
Then, by a similar analysis as in the proof of Lemma 4.4, we obtain the almost sure limit of R̃_{t∧T} as t → T.
Thus, by (2.7), the law of η is the same as the law of η̃ tilted by R̃_{t∧T}. Also, by the induction hypothesis on η̃, the law of η̃ is the same as that of γ weighted by a Radon-Nikodym derivative as in (4.10). Hence, we see that the law of η is that of γ weighted by the Radon-Nikodym derivative (4.13).
Corollary 4.7. Assume that Theorem 4.1 holds for N − 1. Then, Theorem 4.1 also holds for N .
Proof. For definiteness, we assume that j = 1. Lemmas 4.3 and 4.6 together with the conformal covariance (3.9) of the partition function Z_Ising show that the Loewner chain associated to Z_Ising with launching points (x_1, . . . , x_2N) starting from x_1 is almost surely generated by a continuous simple curve (η(t), 0 ≤ t ≤ T) up to and including T, and that this curve almost surely terminates at one of the points {x_2, x_4, . . . , x_2N}. Lemmas 4.4 and 4.6 then imply that the termination probabilities P[η(T) = x_{2ℓ}] for ℓ ∈ {1, 2, . . . , N − 1} are given by the asserted ratios. Since Z_Ising = Σ_α Z_α and since the total probabilities sum up to one, this also gives the probability of the remaining event {η(T) = x_2N}. Thus, we conclude that the asserted formula (4.2) with j = 1 holds. Lastly, Lemma 4.5 shows that for all ℓ ∈ {1, 2, . . . , N − 1}, conditionally on the event {η(T) = x_{2ℓ}}, the law of η is that of the SLE_3 curve γ in H from x_1 to x_{2ℓ} weighted by the Radon-Nikodym derivative (4.12), which gives the asserted formula (4.3) on these events. On the other hand, Equation (4.13) from Lemma 4.6 together with the covariance (3.9) shows that, conditionally on the event {η(T) = x_2N}, the law of η is that of the SLE_3 curve γ in H from x_1 to x_2N weighted by the corresponding Radon-Nikodym derivative. This gives the asserted formula (4.3) on this event and concludes the proof.
With Corollary 4.7, the proof of Theorem 4.1 is complete by induction.
Crossing Probabilities in the Critical Ising Model
Let Ω^δ be a family of finite subgraphs of the rescaled square lattice δZ², for δ > 0, together with 2N fixed boundary points x^δ_1, . . . , x^δ_2N for each Ω^δ in counterclockwise order. As illustrated in Figure 1.1 in Section 1, we consider the critical Ising model on (Ω^δ; x^δ_1, . . . , x^δ_2N) with the alternating boundary conditions (5.1), where (x^δ_i x^δ_{i+1}) stands for the counterclockwise boundary arc from x^δ_i to x^δ_{i+1}, with the convention that x^δ_2N = x^δ_0 and x^δ_{2N+1} = x^δ_1. In this setup, each Ising model configuration on Ω^δ contains N random macroscopic interfaces which connect pairwise the 2N boundary points x^δ_1, . . . , x^δ_2N. When N ≥ 2, these interfaces can form more than one possible connectivity pattern, as illustrated in Figure 1.1.
Suppose that (Ω^δ; x^δ_1, . . . , x^δ_2N) approximate some polygon (Ω; x_1, . . . , x_2N) as δ → 0, as detailed below. K. Izyurov proved in his article [Izy17] that "locally", the scaling limits of these interfaces are given by the Loewner chain (2.4) with κ = 3 and Z = Z_Ising. In Section 5.2, we briefly explain how to extend this result to a "global" one, that is, to establish the convergence for the whole curves instead of only up to a stopping time. The proof crucially relies on the continuity of the Loewner chain, Theorem 4.1.
In this article, we are primarily interested in the probability that the Ising interfaces form a given connectivity encoded in a link pattern α. In general, formulas for such crossing probabilities are not known; a few special cases appear in [Izy15]. Nevertheless, in this section we will prove Theorem 1.1, which shows that the critical Ising crossing probabilities do indeed have a conformally invariant scaling limit (cf. Corollary 1.2), specified as the ratio Z_α/Z_Ising of partition functions discussed in Sections 2-3. Interestingly, this ratio also gives a characterization of the Ising crossing probabilities in terms of a c = 1/2 conformal field theory: indeed, the probability amplitudes Z_α can be seen as correlation functions of a degenerate field with conformal weight h_{1,2} = 1/2, associated to the free fermion (or the energy density) on the boundary; for more discussion on these concepts, see, e.g., the textbooks [ID89, DFMS97], the article [Pel19], and the results in [Hon10, HS13].
Ising Model
To begin, we fix notation to be used throughout. We consider finite subgraphs G = (V(G), E(G)) of the (possibly translated, rotated, and rescaled) square lattice Z². We call two vertices v and w neighbors if their Euclidean distance equals one, and we then write v ∼ w. We denote the inner boundary of G by ∂G := {v ∈ V(G) : there exists w ∉ V(G) such that {v, w} ∈ E(Z²)}. The dual lattice (Z²)* is a translated version of Z²: its vertex set is (1/2, 1/2) + Z² and its edges are given by all pairs of vertices that are neighbors. The vertices and edges of (Z²)* are called dual-vertices and dual-edges, while we sometimes call the vertices and edges of Z² primal-vertices and primal-edges. In particular, to each primal-edge e of Z², we associate a unique dual-edge, denoted by e*, that crosses e in the middle. For a subgraph G of Z², we define G* to be the subgraph of (Z²)* with edge set E(G*) = {e* : e ∈ E(G)} and vertex set given by the endpoints of these dual-edges.
We define a discrete Dobrushin domain to be a triple (G; v, w) with v, w ∈ ∂G, v ≠ w, where G is a finite connected subgraph of Z² such that the complement of G is also connected (that is, G is simply connected). The boundary ∂G is divided into two arcs (v w) and (w v), on which Dobrushin boundary conditions for the Ising model will be specified. We also define a discrete polygon to be a (2N + 1)-tuple (G; v_1, . . . , v_2N), where v_1, . . . , v_2N ∈ ∂G are distinct boundary vertices in counterclockwise order. In this case, the boundary ∂G is divided into 2N arcs, on which alternating boundary conditions will be specified. We also let G denote the simply connected domain formed by all of the faces, edges, and vertices of G.
The Ising model on G is a random assignment σ = (σ_v)_{v∈V(G)} ∈ {⊖, ⊕}^{V(G)} =: Σ_G of spins. With free boundary conditions, the probability measure of the Ising model is given by the Boltzmann measure with inverse-temperature β > 0 and Hamiltonian H^free_G(σ); only the neighboring spins interact with each other. This model exhibits an order-disorder phase transition [MCW73]: there exists a critical temperature such that above it, the Ising configurations are disordered, and below it, large clusters of equal spins appear. For the square lattice, the critical inverse-temperature can be found exactly: β_c = (1/2) log(1 + √2). At criticality, the system does not have a typical length scale, and using renormalization arguments, physicists argued, e.g., in [BPZ84, Car96], and mathematicians later proved in a series of works starting from [Smi06, Smi10], that the model becomes conformally invariant in the scaling limit. In this article, we consider the scaling limit of the Ising model at criticality and verify a feature of its conformal invariance. For a boundary condition τ ∈ {⊖, ⊕}^{Z²}, we define the Ising model µ^τ_{β,G} via the Hamiltonian H^τ_G, in which the spins on the outer boundary of G are fixed to τ. The Ising model satisfies the following domain Markov property, which enables the martingale argument that will be used to prove Theorem 1.1 in Section 5.3. Suppose G ⊂ G′ and fix a boundary condition τ ∈ {⊖, ⊕}^{Z²} for the Ising model on G′. If X is a random variable which is measurable with respect to the status of the vertices of the smaller graph G, then the conditional law of X, given the spins on G′ outside G, coincides with the law of X under the Ising model on G with the corresponding boundary condition. If (G; v, w) is a discrete Dobrushin domain, we may consider the Ising model with Dobrushin boundary conditions (domain-wall boundary conditions): we set ⊕ along the arc (v w), and ⊖ along the complementary arc (w v). More generally, in a discrete polygon (G; v_1, . . . , v_2N), we consider alternating boundary conditions, where ⊕ and ⊖ alternate along the boundary as in (5.1) (see also Figure 1.1).
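For completeness, a standard convention for the nearest-neighbor Ising Hamiltonian and Boltzmann measure is recalled below; this is our transcription, and the treatment of boundary terms may differ slightly from the article's.

```latex
% Free-boundary Hamiltonian and Boltzmann measure, with critical point beta_c:
\[
H^{\mathrm{free}}_G(\sigma) \;=\; -\sum_{\substack{v,w\in V(G)\\ v\sim w}} \sigma_v\,\sigma_w,
\qquad
\mu^{\mathrm{free}}_{\beta,G}[\sigma]
  \;=\; \frac{\exp\!\big(-\beta\,H^{\mathrm{free}}_G(\sigma)\big)}
             {\sum_{\sigma'\in\Sigma_G}\exp\!\big(-\beta\,H^{\mathrm{free}}_G(\sigma')\big)},
\qquad
\beta_c \;=\; \tfrac12\,\log\big(1+\sqrt{2}\,\big).
\]
% With a boundary condition tau, the sum also includes edges {v,w} joining
% v in V(G) to a vertex w outside G, for which sigma_w := tau_w is fixed.
```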
Note that the spins lie on the primal-vertices v ∈ G, while interfaces lie on the dual lattice G*. Let v*_1, . . . , v*_2N be dual-vertices nearest to v_1, . . . , v_2N, respectively. Then, given s ∈ {1, 2, . . . , N}, we define the Ising interface starting from v*_{2s} as follows. It starts from v*_{2s}, traverses the dual-edges, and turns at every dual-vertex in such a way that it always has primal-vertices with spin ⊕ on its left and spin ⊖ on its right. If there is an indetermination when arriving at a vertex (this may happen on the square lattice), it turns left. The Ising interface starting from v*_{2s−1} is defined similarly with the left/right switched. We focus on scaling limits of the Ising model on planar domains: we let G = Ω^δ be a subgraph of the rescaled square lattice δZ² with small δ > 0, which will tend to zero. Our precise approximation scheme is the following [Kar18, Section 4.3]. We say that a sequence of discrete polygons (Ω^δ; x^δ_1, . . . , x^δ_2N) converges as δ → 0 to a polygon (Ω; x_1, . . . , x_2N) in the close-Carathéodory sense if it converges in the Carathéodory sense, and in addition, for each j ∈ {1, 2, . . . , 2N}, we have x^δ_j → x_j as δ → 0 and the following is fulfilled: Given a reference point z ∈ Ω and r > 0 small enough, let S_r be the arc of ∂B(x_j, r) ∩ Ω disconnecting (in Ω) x_j from z and from all other arcs of this set. We require that, for each r small enough and for all sufficiently small δ (depending on r), the boundary point x^δ_j is connected to the midpoint of S_r inside Ω^δ ∩ B(x_j, r).
We emphasize that the Ising spins lie on the primal-vertices, while the interfaces traverse the dual graph. However, we shall abuse notation by writing Ω^δ for both Ω^δ and (Ω*)^δ, and x^δ for both x^δ and (x*)^δ.
Convergence of Interfaces
In this section, we first summarize some existing results on the convergence of Ising interfaces, and then explain how to extend Izyurov's result [Izy17] on the local convergence of multiple interfaces to be global. The convergence will take place weakly in the space of unparameterized curves with metric (2.1).
Starting from the celebrated work of S. Smirnov [Smi06,Smi10], conformal invariance for correlations [CI13,HS13,CHI15,CHI21] and interfaces [HK13, CDCH + 14, Izy17, BH19, BPW21] for the critical planar Ising model has now been verified. The key tool in this work is the so-called discrete holomorphic fermion, developed by Smirnov with D. Chelkak [CS12]. This led in particular to the convergence of the Ising interface in Dobrushin domains [CDCH + 14]: if (Ω δ ; x δ , y δ ) is a sequence of discrete Dobrushin domains converging to a Dobrushin domain (Ω; x, y) in the close-Carathéodory sense, then, as δ → 0, the interface of the critical Ising model on (Ω δ ; x δ , y δ ) with Dobrushin boundary conditions converges weakly to the chordal SLE 3 in (Ω; x, y). Later, C. Hongler, K. Kytölä, and K. Izyurov extended the discrete holomorphic fermion to more general settings [HK13,Izy15,Izy17]. In particular, it follows from Izyurov's work [Izy17, Theorem 1.1] that the Ising interfaces in discrete polygons with alternating boundary conditions (5.1) converge to multiple SLE 3 curves in the following local sense.
To establish the convergence globally, that is, without a cutoff time, we need three pieces of input: 1. the local convergence, as explained above; 2. the fact that the limiting curve η̃_j of ϕ^{−1}_δ(η^δ_j(t)) is continuous up to and including the first swallowing time (4.1) of one of the other marked points, denoted T; and 3. the fact that the Loewner chain η is continuous up to and including the same swallowing time (4.1).
With these three facts at hand, we know that η̃_j has the same law as η up to the cutoff time T_r by Input 1, and letting r → 0, we find that η̃_j and η have the same law up to and including T thanks to Inputs 2 and 3. Among the three pieces of input, we have the local convergence (Input 1) from [Izy17] and [KS17]. The continuity of the scaling limit (Input 2) is a consequence of the Russo-Seymour-Welsh estimate for the Ising model from [CDCH16, Corollary 1.7] combined with the results in [AB99, KS17], as argued in [Izy15, Remark 3.2]. Finally, we established the (non-trivial) continuity of the Loewner chain η (Input 3) in Theorem 4.1 in the previous section. In summary, we have the following convergence.
Remark 5.2. In [BPW21, Theorem 1.2], it was proved that for each α ∈ LP_N, there exists a unique "global" multiple SLE_3 associated to α. This is a probability measure supported on families of curves with the given topological connectivity α. Furthermore, it follows from [PW19, Lemma 4.8 & Proposition 4.9] that these curves are given by Loewner chains whose driving functions satisfy the SDEs (2.4) with Z the pure partition function Z_α for κ = 3. On the other hand, since Z_Ising = Σ_α Z_α, it follows from Proposition 5.1 and the so-called local commutation property for multiple SLEs à la Dubédat [Dub07] (cf. [KP16, Sampling Procedure A.3] and [PW19, Corollary 1.2]) that the probability measure of the collection of limit curves of the Ising interfaces is the global multiple SLE_3 which is a convex combination of the extremal multiple SLE_3 probability measures associated to the various possible connectivity patterns of the interfaces; see also the recent work [Kar19]. From this, we see that the probability amplitudes Z_α can be thought of as a manifestation of Doob's h-transform. Making this precise, however, would require some further technical work.
Crossing Probabilities -Proof of Theorem 1.1
Now we are ready to prove the main result of this article: Theorem 1.1. Let (Ω; x_1, . . . , x_2N) be a bounded polygon with N ≥ 1, and suppose that discrete polygons (Ω^δ; x^δ_1, . . . , x^δ_2N) on δZ² converge to (Ω; x_1, . . . , x_2N) as δ → 0 in the close-Carathéodory sense. Consider the critical Ising model on Ω^δ with alternating boundary conditions. Denote by ϑ^δ the random connectivity pattern in LP_N formed by the N discrete interfaces with law P^δ. Then, we have
lim_{δ→0} P^δ[ϑ^δ = α] = Z_α(Ω; x_1, . . . , x_2N) / Z_Ising(Ω; x_1, . . . , x_2N),   for all α ∈ LP_N,    (1.1)
where Z_Ising = Z^{(N)}_Ising = Σ_{α ∈ LP_N} Z_α and {Z_α : α ∈ LP_N} is the collection of functions uniquely determined as the solution to the PDE boundary value problem given in Definition 2.1, also known as pure partition functions of multiple SLE_κ with κ = 3.
Proof. We prove the claim by induction on N ≥ 1. It is trivial for N = 1 because both sides of (1.1) equal one. Thus, we assume that the claim holds for N − 1, fix α ∈ LP_N, and aim to prove the claim for N along any convergent subsequence (δ_n). Note that the right-hand side is conformally invariant by the Möbius covariance (2.3) with h = 1/2. For topological reasons, the link pattern α contains at least one link of the type {j, j + 1}. For definiteness, we assume that j = 1, so {1, 2} ∈ α. From Proposition 5.1, we see that (ϕ^{−1}_{δ_n}(η^{δ_n}_1), 0 ≤ t ≤ T^{δ_n}) converges weakly in the metric (2.1) to (η(t), 0 ≤ t ≤ T), that is, the Loewner chain associated to the partition function Z_Ising with launching points (x_1, . . . , x_2N) starting from x_1. For convenience, we couple them by the Skorohod representation theorem in the same probability space so that they converge almost surely.
First, let us analyze the limit curve η. Using the notation g_t(x_i) = V^i_t for all i ≠ 1, we define the process M_t as the ratio Z_α/Z_Ising evaluated along the Loewner flow. The partial differential equations (2.2) show that M_t is a local martingale. Let us consider the limit of M_t as t → T. From Theorem 4.1, we know that the curve η is almost surely continuous up to and including T and terminates at one of the points {x_2, x_4, . . . , x_2N}. Denote by D_η the unbounded connected component of H \ η[0, T], and by α̂ = α/{1, 2} ∈ LP_{N−1}. On the event {η(T) = x_2}, we see from the strong asymptotics properties (2.11) and (2.13) and the conformal covariance property (2.7) that M_t converges almost surely as t → T.
In summary, the almost sure limit M_T := lim_{t→T} M_t exists and is determined by the terminal point η(T).
Since 0 < Z_α/Z_Ising ≤ 1 due to (2.12), we see that (M_t, t ≤ T) is a bounded martingale. The optional stopping theorem then gives the identity M_0 = E[M_T]. Next, let us consider the discrete interface η^{δ_n}_1. For simplicity of notation, we shall use the superscript "n" instead of "δ_n" in what follows. On the event {η^n_1(T^n) = x^n_2}, we denote by D^n the connected component of Ω^n \ η^n_1 with x^n_3, . . . , x^n_2N on its boundary. Since ϕ^{−1}_n(η^n_1) converges to the continuous simple curve η in H that intersects the boundary R only at its two endpoints by Theorem 4.1, we see that, as n → ∞, the polygon (D^n; x^n_3, . . . , x^n_2N) converges almost surely to the polygon (ϕ^{−1}(D_η); x_3, . . . , x_2N) in the close-Carathéodory sense. Hence, using the domain Markov property of the Ising model, the induction hypothesis, the conformal invariance of the right-hand side of (1.1), and the conformal invariance of the SLE_3-type curve η, we can identify the limit of the conditional crossing probabilities, where P̃^n is the law of the Ising interfaces on the random polygon (D^n; x^n_3, . . . , x^n_2N), measurable with respect to η^n_1, which form a random connectivity pattern ϑ^n ∈ LP_{N−1}, and where, by the Skorohod representation theorem, we couple all of the random variables on the same probability space so that the convergence takes place almost surely. Thus, we conclude (using the bounded convergence theorem) that the asserted convergence (1.1) holds for N. This completes the induction step and finishes the proof of Theorem 1.1.
Proof of Corollary 1.2. The asserted properties follow from the corresponding properties of the multiple SLE_3 partition functions Z_α and Z_Ising by using the right-hand side of (1.1): the conformal invariance follows from the Möbius covariance (2.3) with h = 1/2; the asymptotics are a consequence of (2.5) and (2.13) with h = 1/2; and the PDEs (1.2) are given by the PDEs (2.2) with κ = 3 and h = 1/2.
Appendix A: Connection Probabilities for Level Lines of the GFF
In the proof of Lemma 3.4 in Section 3, we use the following facts concerning the level lines of the Gaussian free field (GFF). We shall not define the GFF nor its level lines precisely, because they are not needed to understand the present article. The reader may find background on this topic, e.g., in [She07, SS13, WW17]. Importantly, the level lines are SLE_κ-type curves with κ = 4. Fix a constant λ = π/2. Let Γ be the GFF in H with alternating boundary data; the connection probabilities of its level lines are the quantities appearing in (A.1). R. Kenyon and D. Wilson have found explicit formulas for crossing probabilities in the double-dimer model [KW11]. These formulas are combinatorial expressions involving the inverse Kasteleyn matrix. On the other hand, explicit formulas of a similar type were obtained in [KW11, PW19] for the connection probabilities of the GFF level lines appearing in (A.1), where the inverse Kasteleyn matrix gets replaced by the boundary Poisson kernel. Using Kenyon's results [Ken00], it should be possible to explicitly check that, in suitable approximations, the double-dimer crossing probabilities converge in the scaling limit to (A.1). Note, however, that the convergence of double-dimer interfaces to the SLE_4 still remains conjectural.
Appendix B: Connection Probabilities for Loop-Erased Random Walks
R. Kenyon and D. Wilson also found in [KW11] determinantal formulas (analogous to Fomin [Fom01]) for connectivity probabilities for multichordal loop-erased random walks (LERW). It follows from these formulas and the discrete complex analysis developed in [CFL28,CS11] that the multichordal LERW connectivity probabilities converge (when suitably renormalized) to the pure partition functions of multiple SLE κ with κ = 2 -for a proof, see, e.g., [KKP20, Theorems 3.16 and 4.1] and [Kar20, Theorem 2.2].
In these results, the multichordal LERWs are realized as boundary-touching branches in a uniform spanning tree (UST) with wired boundary conditions. Such curves converge in the scaling limit to multiple SLE_κ curves with κ = 2 [Sch00, LSW04, Zha08, Kar19]. The reader can find the precise definitions of these objects, as well as the detailed setup for the scaling limit results, in [KKP20, Section 3]; see also the recent [Kar20]. To give a satisfactory statement, we translate the notations used there into the notations used in the present article. Recall from Section 5.1 that, for a finite subgraph G = (V(G), E(G)) of Z², we denote the inner boundary of G by ∂G = {v ∈ V(G) : there exists w ∉ V(G) such that {v, w} ∈ E(Z²)}. We also define the outer boundary (called boundary in [KKP20]) of G as the set of vertices w ∉ V(G) for which there exists a vertex v ∈ V(G) such that {v, w} ∈ E(Z²). Then, as in [KKP20, Section 3], we call the edges e = {v, w} of this type boundary edges of G, and we denote v = e^• and w = e^∂. Now, for a discrete polygon (G; v_1, . . . , v_2N), we may consider those branches in the UST that start from the vertices v_1, . . . , v_2N. Importantly, these discrete curves may form other topological configurations than those labeled by the link patterns: the curves can merge in various ways. However, for each link pattern α = {{a_1, b_1}, . . . , {a_N, b_N}} ∈ LP_N, we can consider the probability that the discrete curves form the connectivity pattern α in the following sense. We group the marked vertices v_1, . . . , v_2N into two groups {v_{2s−1} : s = 1, 2, . . . , N} and {v_{2s} : s = 1, 2, . . . , N}. We form a modified discrete polygon by replacing the latter group by {w_{2s} : s = 1, 2, . . . , N}, where v_{2s} and w_{2s} form a boundary edge e_{2s} = {v_{2s}, w_{2s}}, so that v_{2s} = e^•_{2s} and w_{2s} = e^∂_{2s}. Then, we may consider the event that there exist N branches in the UST connecting v_{a_s} to w_{b_s}, for all s ∈ {1, 2, . . . , N}, with the convention that in α, the indices a_1, . . . , a_N are odd and the indices b_1, . . . , b_N are even. Adapting the notations in [KKP20], for each s ∈ {1, 2, . . . , N}, we denote the corresponding event accordingly; Theorem B.1 then states that these connection probabilities, suitably renormalized, converge to the pure partition functions, where {Z_α : α ∈ LP_N} is the collection of functions uniquely determined as the solution to the PDE boundary value problem given in Definition 2.1, also known as pure partition functions of multiple SLE_κ with κ = 2.
We remark on the normalization factor δ^{−2N} in the connection probability (B.1), absent from Theorem 1.1 for the Ising case, as well as on the absence of the symmetric partition function Z_LERW := Σ_α Z_α. In fact, a formula for Z_LERW is known [PW19, Lemma 4.12], and one could also consider the UST conditioned on the event that the branches starting from the vertices x^δ_1, . . . , x^δ_2N connect according to some (random) link pattern ϑ^δ that belongs to LP_N. This conditioning amounts to dividing by Z_LERW, and with the conditioned UST probability measure P̃^δ_LERW, one obtains an analogous convergence statement. Here, the powers δ^{−2N} are cancelled by the conditioning, resulting in the normalization factor Z_LERW on the right-hand side. Instead of Theorem B.1, which crucially relies on the exact solvability of the crossing probabilities in the discrete model in terms of Fomin's formulas [Fom01], similar ideas as in the case of the Ising model could be used here. For such an approach, the main inputs would be the following: 1. local convergence of the branches to multiple SLE_2 curves (proven in [Kar20, Theorem 2.1]); 2. continuity of the limiting curve up to and including the swallowing time of the spectator points (proven in [Kar19, Theorem 6.8]); and 3. the fact that the Loewner chain associated to the partition function Z_LERW is continuous up to and including the same stopping time. Item 3 could be shown similarly as Theorem 4.1, provided that one first proves analogues of Propositions 3.1 and 3.2 for κ = 2.
"Physics",
"Mathematics"
] |
Whole-Genome-Based Helicobacter pylori Geographic Surveillance: A Visualized and Expandable Webtool
Helicobacter pylori exhibits specific geographic distributions that are related to clinical outcomes. Despite the high worldwide infection rate of H. pylori, genetic epidemiological surveillance of this pathogen still needs to be improved. This study used a single nucleotide polymorphism (SNP) profiling approach based on whole genome sequencing (WGS) to facilitate genomic population analyses of H. pylori and to encourage the dissemination of microbial genotyping strategies worldwide. A total of 1,211 public H. pylori genomes were downloaded and used to construct the typing tool, named HpTT (H. pylori Typing Tool). Combined with the metadata, we developed two levels of genomic typing, including a continent scale and a country scale nested within the continent scale. Results showed that Asia was the largest isolate source in our dataset, while isolates from Europe and Oceania were comparatively more widespread. More specifically, Switzerland and Australia are the main sources of widespread isolates in their corresponding continents. To integrate all the typing information and enable researchers to compare their datasets against the existing global database easily and rapidly, a user-friendly website (https://db.cngb.org/HPTT/) was developed with both genomic typing and visualization tools. To further confirm the validity of the website, ten newly assembled genomes were downloaded and tested; they were placed precisely on the branches we expected. In summary, the H. pylori typing tool (HpTT) is a novel genomic epidemiological tool that achieves high-resolution genomic typing and visualization simultaneously, providing insights into the genetic population structure, evolution, and epidemiological surveillance of H. pylori.
INTRODUCTION
Helicobacter pylori is one of the most sophisticated colonizers in the world, infecting more than half of the world's population, from infants to the elderly (Suerbaum and Michetti, 2002). It is a Gram-negative bacterium that normally colonizes the gastric mucosa of humans, with about 10-20% of infections resulting in disease (Pohl et al., 2019; Attila et al., 2020). Typical diseases that have been reported include gastritis, peptic ulcer, mucosa-associated lymphoid tissue (MALT) lymphoma, and gastric cancer (Ernst and Gold, 2000). Globally, the risk of disease and the incidence and mortality of gastric cancer differ geographically (Kodaman et al., 2014).
H. pylori displays an exceptionally high mutation rate among bacterial pathogens because it lacks the genes that initiate classical methyl-directed mismatch repair (MMR) (Alm et al., 1999). The high mutation and recombination rates give the H. pylori genome enormous plasticity, enabling this pathogen to adapt closely to its host (Kang and Blaser, 2006; Didelot et al., 2013). It has been reported that chronic H. pylori infection can be transmitted vertically and within families (Schwarz et al., 2008; Ailloud et al., 2019). During within-host evolution, the mutation rate can reach ∼30 single nucleotide polymorphisms (SNPs) per genome per year (Kennemann et al., 2011), compared to ∼1 SNP per genome per year for Escherichia coli (Reeves et al., 2011). Given this high mutation rate and the large recombination events, a simple and efficient way to define geographic patterns and support epidemiological surveillance of H. pylori is needed (Yamaoka, 2009; Jolley et al., 2018).
Transmission of H. pylori is slow, taking place mostly within households; it does not tend to spread like a rapid epidemic (Didelot et al., 2013). Its phylogeny, based first on MLST genes and later on whole genomes, revealed a population structure primarily reflecting early human migration events, especially the migration out of Africa 60,000 years ago, rather than recent spreading (Falush et al., 2003). By the agreed convention, the global population is split into hp groups, each of which is split into hsp subgroups; for example, hpEastAsia includes hspEastAsia, hspMaori, and hspAmerind (Kawai et al., 2011; Montano et al., 2015; Thorell et al., 2017).
To describe the population structure of H. pylori, genetic typing methods such as single-gene typing (e.g., cagA, vacA) were used in previous studies (Salama et al., 2007; Yamaoka, 2009), while seven-gene multi-locus sequence typing (MLST) later became the dominant tool owing to its simple and rapid typing strategy; it covers the genes atpA, efp, mutY, ppa, trpC, ureI, and yphC and categorizes H. pylori into different sequence types (STs) (Achtman et al., 1999). However, the resolution of seven-gene MLST is still low, which limits our ability to trace the epidemiological origins of H. pylori strains (Banerji et al., 2020). In contrast, SNP typing covers the core genes comprehensively and generates a matrix of concatenated SNPs together with their positions in the genome, which makes newly sequenced genomes comparable by mapping and increases the typing resolution.
Seven-gene MLST has also been linked to regional epidemics across the world. The seven-gene MLST typing method enables region-specific recognition based on the defined STs, and this geographic pattern is linked with different risks of clinical disease. For example, non-African and African lineages are associated with different risks of gastric disease (Campbell et al., 2001). Thus, geographic patterns can to some extent be linked to the likelihood of clinical disease. However, the seven-gene genotypes of H. pylori are diverse because of the high variability of H. pylori genomes, which hinders the recognition of geographic patterns directly from the sequence types (STs) of seven-gene MLST. In addition, seven-gene MLST provides no geographic pattern information or visualization tools, so the related geographic pattern is hard to determine when a new ST is found.
This study describes an H. pylori genomic typing tool, HpTT (H. pylori Typing Tool), that uses SNP profiling based on whole-genome sequencing data. In addition to genomic typing, HpTT provides phylogenetic and geographic visualization tools built on the Nextstrain framework (Hadfield et al., 2018). This tool allows users to upload H. pylori WGS data for genomic typing and to uncover possible transmission events of H. pylori. We believe that this tool can not only improve genome typing resolution but may also predict the possible origin of epidemic H. pylori isolates, enabling global surveillance of H. pylori.
Helicobacter pylori Genomes Downloaded and Filtered in This Study
A total of 1,654 assembled H. pylori genomes were downloaded from the NCBI RefSeq database (genomes available as of May 4, 2020) using the ncbi-genome-download tool (version 0.2.12). The corresponding metadata of the assembled genomes were retrieved using Entrez Direct (version 10.9) (Kans, 2020). After metadata filtering, 1,211 genomes with an available sample collection location were selected (Table 1). All genomes were scanned with mlst (version 2.11) using the MLST database updated on December 31, 2020 (Jolley and Maiden, 2010).
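A minimal sketch of the metadata-filtering step is shown below. This is our illustration, not the HpTT code; the file name and the geo_loc_name column are assumptions about how the Entrez Direct output is organized.

```python
import pandas as pd

# Assumed input: a tab-separated table exported from Entrez Direct, one row per
# assembly, with a "geo_loc_name" column holding the sample collection location.
meta = pd.read_csv("hpylori_assembly_metadata.tsv", sep="\t")

# Keep only assemblies whose collection location is recorded.
has_location = meta["geo_loc_name"].notna() & (meta["geo_loc_name"].str.strip() != "")
filtered = meta.loc[has_location].copy()

# Values such as "Switzerland: Bern" are split so that the country part can be
# used for the country-level typing.
filtered["country"] = filtered["geo_loc_name"].str.split(":").str[0].str.strip()

print(f"{len(filtered)} of {len(meta)} assemblies have a collection location")
filtered.to_csv("hpylori_metadata_filtered.tsv", sep="\t", index=False)
```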
SNP Analysis
The 1,211 assembled genomes were mapped to the reference genome H. pylori 26695 (GenBank: AE000511.1) (Tomb et al., 1997) using MUMmer (version 3.23) (Kurtz et al., 2004). SNPs were filtered with a minimum mapping quality cutoff of 0.90 across the 1,211 assembled H. pylori genomes. In total, 6,129 SNPs were found, and an SNP profile of H. pylori was established for the corresponding isolates.
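The SNP profile itself can be represented as a concatenated alignment over the union of variant positions. The sketch below is ours, with toy data, and only illustrates the idea once per-isolate SNP calls relative to H. pylori 26695 have been parsed from the MUMmer output.

```python
# Hypothetical input: for each isolate, a dict {reference_position: alternative_base}
# parsed from the mapping output; names below are illustrative only.

def build_snp_profile(snp_calls, reference):
    """Concatenate the bases of every isolate at the union of all SNP positions.

    snp_calls: {isolate_id: {pos (int, 1-based): alt_base (str)}}
    reference: reference genome sequence as a string
    Returns (sorted_positions, {isolate_id: concatenated_bases}).
    """
    positions = sorted({pos for calls in snp_calls.values() for pos in calls})
    profile = {}
    for isolate, calls in snp_calls.items():
        # At positions where this isolate has no call, keep the reference base.
        profile[isolate] = "".join(calls.get(p, reference[p - 1]) for p in positions)
    return positions, profile

# Toy example:
ref = "ACGTACGTAC"
calls = {"iso1": {2: "T", 7: "A"}, "iso2": {2: "T"}, "iso3": {9: "G"}}
pos, prof = build_snp_profile(calls, ref)
print(pos)   # [2, 7, 9]
print(prof)  # {'iso1': 'TAA', 'iso2': 'TGA', 'iso3': 'CGG'}
```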
Phylogenetic Analysis
The maximum likelihood (ML) phylogenetic tree was constructed with IQ-TREE (version 2.0.3) (Nguyen et al., 2015) based on the alignment of 6,129 SNPs from all 1,211 isolates. The reference genome H. pylori 26695 was used as an outgroup. The tree was inferred under the GTR substitution model with a Gamma distribution to model site-specific rate variation. The number of bootstrap pseudo-replicates of the alignment was set to at least 1,000. All ML trees were visualized and annotated using FigTree (version 1.4.4). The minimum spanning tree was constructed with GrapeTree (v1.5.0) (Zhou et al., 2018). The mutation rate of the cagA gene was calculated with BEAST v1.8.4 (Suchard et al., 2018).
Geographic Typing System
Based on the phylogenetic tree, two levels of geographic groups were defined: the first level at the continent scale and the second level at the country scale. At the first level, lineages containing more than seven isolates, of which >75% were sourced from one major continent, were defined as a continent-specific group or clade. A mixed-continent group was defined when no major continent accounted for >75% of the isolates. At the second level, lineages containing more than one isolate, of which >75% were sourced from one major country, were defined as a country-specific group or subclade. In addition, a mixed group was also defined at level two when a lineage contained more than two isolates and no major country accounted for >75% of them. The association of the genomic lineages of H. pylori with the geographic information of the isolates provides a map that allows us to trace both the possible transmission and the evolution of a newly detected or sequenced H. pylori genome.
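A compact way to express the grouping rule is the function below. This is our reading of the thresholds described above, not the HpTT implementation; borderline cases, such as whether the mixed groups also require the minimum isolate counts, are assumptions.

```python
from collections import Counter

def classify_group(regions, level):
    """Classify one lineage (a list of region labels) at a given typing level.

    level == "continent": continent-specific if >7 isolates and >75% from one
                          continent; mixed if >7 isolates but no continent reaches 75%.
    level == "country":   country-specific if >1 isolate and >75% from one country;
                          mixed if >2 isolates and no country reaches 75%.
    """
    n = len(regions)
    top_region, top_count = Counter(regions).most_common(1)[0]
    majority = top_count / n > 0.75
    if level == "continent":
        if n > 7 and majority:
            return f"{top_region}-specific"
        if n > 7:
            return "mixed-continent"
    elif level == "country":
        if n > 1 and majority:
            return f"{top_region}-specific"
        if n > 2:
            return "mixed-country"
    return "non-grouped"

print(classify_group(["Asia"] * 9 + ["Europe"], "continent"))       # Asia-specific
print(classify_group(["Asia", "Europe", "Africa"] * 3, "continent"))  # mixed-continent
```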
Establishment of Helicobacter pylori Database
The HpTT website was established with two modules: (1) a genomic-geographic typing tool for H. pylori isolates and (2) a visualization tool for both the genomic and the geographic typing results. The online typing tool was written in PHP, JavaScript, CSS, and HTML. The online visualization service was built on the CodeIgniter framework; tree visualization uses the augur bioinformatics tool and the auspice visualization tool embedded in the Nextstrain (Hadfield et al., 2018) open-source project. The H. pylori database is stored in a MySQL database.
Definition of Two Levels of Geographic Genotypes for Helicobacter pylori
A total of 1,211 assembled genomes with available geographic information were downloaded from the NCBI RefSeq database and analyzed to establish the H. pylori genotyping database (Supplementary Table 1). All assembled genomes were mapped to the reference genome H. pylori 26695. Based on the maximum likelihood tree, 6,129 SNPs extracted from 1,135 genes of the reference genome were used for genomic typing. In terms of geographic information, 1,112 isolates were grouped at two levels, comprising 37 continent-level groups (Figures 1A,B) and/or 236 country-level groups (Figures 1C,D). The median pairwise distances (the median number of SNPs shared by the branches) between isolates were 319 SNPs within continent clades and 1,493 SNPs within country subclades. These continent clades and country subclades were labelled with a structured hierarchical nomenclature similar to that used for M. tuberculosis (Coll et al., 2014); for instance, the region 1 clade (G1) is subdivided into country subclades G1.C1 and G1.C2. The mutation rate of cagA was 2.413 × 10⁻² (95% CI: 1.600 × 10⁻² to 3.900 × 10⁻²), which was 1.739 × 10⁻² substitutions per site per year (95% CI: 1.153 × 10⁻² to 2.811 × 10⁻²).
Continent-Level Genomic Typing for Helicobacter pylori
A total of 37 continent-level groups (n = 1,112) were defined, including 25 continent-specific groups and 12 mixed-continent groups (Figures 1A,B). Isolates that did not fall into a continent group but could still be assigned to a country group were placed in a group named G0 (n = 74), and isolates that fell into neither a country group nor a continent group were defined as non-grouped (n = 25). The genome data of H. pylori were downloaded from the NCBI database and come from various regions of the world; compared with their ancestors, these strains carry distinct genomic changes that have produced independent evolutionary branches. Two reasons may explain why some of these branches contain too few strains to form a group with regional characteristics under our typing method: (1) a lineage may not have spread after forming an independent evolutionary branch, or (2) it spread but its descendants were not collected. Five continent-specific groups contained more than 75% Asian isolates, making Asia the continent providing the largest number of isolates (n = 319, 26.34%) (Figure 2). North America was the second-largest source, consisting of six continent-specific groups (n = 132, 10.90%). Although fewer isolates were sourced from Europe (n = 109, 9.00%), these isolates were distributed across nine continent-specific groups. Two groups (G16 and G29) were Oceania-specific (n = 39, 3.22%), and three groups (G1, G26, and G35) were South America-specific (n = 109, 9.00%). In addition, the 12 mixed groups contained 226 isolates (18.66%). Among all continent-level groups, G2 was the largest continent-specific group (n = 223), consisting mainly of isolates from Asia (193/223, 82.83%), while G35 was the second largest (n = 109), consisting mainly of isolates from South America (99/109, 87.61%). No Africa-specific group was found; isolates collected from Africa were present only in G28 (n = 2), G37 (n = 7), and G29 (n = 1) (Figure 2).
Although the continent-specific groups were not composed exclusively of isolates from a single continent in our typing system, transmission events could still be inferred. Most Asian isolates fell into the Asia groups, with only a small proportion in the mixed groups. Similarly, most isolates from North America and South America fell into their own regional groups, with a minority in the mixed groups. Interestingly, isolates from Oceania and Europe were found across all 12 mixed-continent groups, reflecting the fact that H. pylori isolates from these two continents are relatively widespread across the globe.
Nested Country-Level Genomic Typing for Helicobacter pylori
A total of 859 isolates were grouped into 216 country-level geographic patterns, predominantly from 29 countries across six continents (Figure 3). Of the 216 groups covering these 29 countries, 168 groups spanning 20 countries were defined as country-specific groups, while the remaining 9 countries were scattered over the 48 country-level mixed groups.
G35.C07 was the largest country-specific group, containing 49 isolates from Colombia, followed by G35.C05 (n = 35), which was also dominated by Colombian isolates. These Colombian isolates were mainly collected under NCBI BioProject PRJNA352848, a study of the population structure and regional evolution of H. pylori in South America (Muñoz-Ramírez et al., 2017). The isolates in groups G35.C07 and G35.C05 were found mainly in Colombia, Mexico, and Spain (Figure 3), suggesting that these H. pylori isolates were possibly introduced from Spain and then spread locally in South America and North America. In comparison, Australia and Switzerland were the largest sources of isolates scattered across more than half of the country-specific groups.
When comparing the percentage of isolates from different countries, isolates from France, Germany, Malaysia, Nicaragua, Sweden, and the United Kingdom were scattered across more than one continent group, whereas isolates from Cambodia, Colombia, India, Peru, Spain, and the United States were concentrated in a single continent group even though they also appeared in others. Most notably, Australia and Switzerland were the two countries whose isolates were most widely scattered across different region-specific groups.
Three clusters were observed when comparing the percentages of isolate sources at the continent scale. The first cluster (G32 to G25, red branches in Figure 3) consisted of European and mixed-continent groups; the isolates in its mixed groups were mainly sourced from European and Oceanian countries, making this a Europe-Oceania-dominated cluster. The second cluster (G4 to G2, green branches in Figure 3) mixed Asian, Oceanian, European, and mixed groups but was dominated by isolates from Australia and Asian countries, and was therefore designated the Asia-Pacific cluster. The third cluster (G31 to G37, purple branches in Figure 3) was formed by North American groups, with the South American branches adjacent to this cluster.
Comparing With hp and hsp Class
The hp and hsp classes were designed for the geographic-genetic typing of H. pylori (Kawai et al., 2011; Montano et al., 2015; Thorell et al., 2017; Lamichhane et al., 2020). Of the 1,211 H. pylori genomes, 231 had previously been typed into hp and hsp classes, and these assignments fit well with our typing groups. Specifically, hpEastAsia, hpAsia2, and hspEAsia were included in the three Asian continent groups G2, G3, and G4 (Figures 1E,F), while hspEuropeColombia fell into two South American groups, G26 and G35. Similarly, hpAfrica1, hspMiscAmerica, and hspAfrica1NAmerica mapped to the mixed group G37. The comparison with the hp and hsp clusters supports the validity of our typing method.
Comparing With Seven-Gene MLST
Seven-gene MLST was applied to obtain sequence types (STs) for all 1,211 isolates. Unfortunately, because of the high mutation rate of H. pylori strains, most of the seven-gene alleles showed only high similarity to known alleles rather than an exact match; as a result, a large number of isolates (n = 876, 72.3%) remained untyped in our dataset (Supplementary Table 1 and Figure 1). Among all countries, Australia and Switzerland had the highest numbers of untyped isolates, probably because the isolates collected in those two countries had not been submitted to the pubMLST website for typing.
A User-Friendly Typing Website
To support our H. pylori geographic typing tool, a user-friendly typing website was established and made available at https://db.cngb.org/HPTT/. Our HpTT approach is compatible with any whole-genome sequencing (WGS) data accompanied by metadata (Figure 4). For sequencing data from pure-cultured isolates, the assembled genomes can be submitted directly to our website; sequences or assembled genomes from metagenomic samples, however, need to be extracted before submission (Parks et al., 2017; Olekhnovich et al., 2019). Besides newly sequenced genome data, assembled contigs available from the NCBI Sequence Read Archive (SRA), the assembly database (RefSeq), or other genome databases (e.g., the European Nucleotide Archive) can also be uploaded directly to our website. Through MUMmer alignment and a BLAST step, the uploaded genome is placed next to its closest matching genomes, which further facilitates the analysis of possible transmission routes across the globe. In addition, our database is linked to the NCBI genome database, helping the user easily locate metadata from the available databases (see Supplementary Material).
FIGURE 2 | Geographical clustering of H. pylori continent clades. The number in each cube represents the percentage of unique isolates sourced from each continent. A total of 37 continent-level groups were defined; the deeper the color, the higher the percentage of isolates from that continent in the clade group. A phylogenetic tree is shown on the left side of the table. Background information on the isolates is provided in Supplementary Table 1.
In addition to the typing tool, the Nextstrain framework is also embedded in our website. By clicking the number of an uploaded genome, the user is linked to the phylogenetic tree with the corresponding continent and country information. The display of possible evolutionary relationships and the interactive locating function make our typing tool easy to apply and to understand.
The Validation of Our Genomic Typing Method
To validate the accuracy of the genomic typing method and the efficiency of the web tool, ten new genomes were downloaded from NCBI and tested (Supplementary Table 2). Except for one genome (GCF_002206465.1), which failed because it was sequenced with PacBio (our typing tool was established on massively parallel sequencing (MPS) data, and PacBio sequencing may generate many SNPs in regions that are gaps in MPS data), the remaining nine genomes were typed successfully.
FIGURE 3 | Geographical clustering of H. pylori country subclades. The number in each cube represents the percentage of unique isolates sourced from each country in that continent group. A total of 216 country-level groups were defined; the deeper the color, the higher the percentage of isolates sourced from that country in the continent-level group. Background information on the isolates is provided in Supplementary Table 1.
DISCUSSION
The epidemiological patterns of H. pylori isolates have been reported to show specific geographic characteristics. In this study, the new typing web tool HpTT not only illustrated the population structure of H. pylori but also made genomic typing easy to perform. At the continent level of typing, 1,112 isolates were grouped into 37 continent-level patterns; apart from the 12 mixed-continent groups, the rest could be defined as continent-specific groups across five continents. Isolates from Europe and Oceania were found in most of the continent-level groups (Europe: 33/37, 89.19%; Oceania: 26/37, 70.27%), indicating that isolates from these two continents are widely spread across the world.
At the country level of typing, 1,045 isolates were grouped into 216 country-level groups. Most of these groups were country-specific (168/216, 77.77%), while the rest were country-level mixed groups (48/216, 22.22%). Australian and Swiss isolates were found to be widespread around the world, whereas isolates from Colombia were more regionally restricted. It has been reported that H. pylori in South America was originally introduced from Spain (Muñoz-Ramírez et al., 2017); this agrees well with our results for G35.C05 and G35.C07 and supports the accuracy of our genomic typing method.
The phylogenetic tree in this study was built from the collection of H. pylori genomes downloaded from the NCBI RefSeq database. Ideally, all isolates would be assigned to geographic groups, but a few isolates could not be grouped by our typing tool for the following reasons: (1) they have not spread since forming independent evolutionary branches, or (2) they spread, but their offspring have not been collected and sequenced.
H. pylori shows extensive, fine-scale (~40 bp patch) intergenic recombination (Bubendorfer et al., 2016), which leads to shared patches of genome sequence and obscures phylogenetic relationships; special methods have been developed to infer population structure from this sharing (Yahara et al., 2013). Although a typing method built on core SNPs cannot trace the origin of isolates as accurately as a recent comprehensive study of H. pylori (Muñoz-Ramírez et al., 2021), we established a simple, rapid, and user-friendly genetic-geographic typing tool for describing population structure. The core SNPs of the 1,211 H. pylori genomes were filtered with a minimum mapping quality cutoff of 0.90, which means that individual indels of the isolates were not kept. Our typing method was further validated with test genomes, suggesting that the typing tool was successfully established.
FIGURE 4 | The HpTT workflow. The SNP-based genotyping approach can be used with whole-genome sequencing (WGS) data acquired in the following ways: DNA can be extracted from a pure-cultured bacterial isolate for WGS or from a community sample for metagenomic sequencing. After sequencing on an appropriate platform, the assembled genomes can be submitted directly to our database; publicly available assembled data can also be submitted directly. The downstream analyses of the aligned sequence data are linked to the phylogenetic and geographic page.
The addition of seven-gene MLST to our database is intended to offer users an easy way to visualize and compare the results from our typing method and from seven-gene MLST. The large number of isolates left untyped by seven-gene MLST might be related to insufficient submission of genomes to pubMLST. In our typing database, isolates collected from Australia and Switzerland were scattered across different regional groups, which might reflect frequent transmission events between Australia or Switzerland and other countries.
In addition to the novel typing tool, a user-friendly website was also established in this study. With this typing tool, users can achieve fast and precise genomic typing and easily locate possible origins and transmission events across the world. Once a genome has been placed in a geographic group, users can readily check the details of the corresponding branches in our database. The genome with the highest identity can be linked directly to the NCBI database as well as to the visualization tool, where the dynamic evolution of H. pylori is shown. At the same time, seven-gene MLST results are displayed for each genome in the database, together with the previously published hp group and hsp subgroup assignments (Kawai et al., 2011; Yahara et al., 2013).
A particularly interesting aspect of the HpTT tool and methodology is that genome typing can be performed on genomes assembled from metagenomic samples, as illustrated in Figure 4. Because of the rapid mutation of H. pylori, a sample from an individual's gut is most likely heterogeneous. By combining sequencing libraries labelled with different barcodes, whole-genome sequencing of a metagenomic sample together with a cultured pure isolate could yield enough data from a single run to perform epidemiological surveillance of H. pylori at a global level and to find possible transmission events in the evolutionary profile. An open-source assay protocol will be developed and shared in the future to combine with the HpTT tool and enable the epidemiological surveillance of H. pylori.
Although our typing tool fills a gap in the genetic epidemiological surveillance of H. pylori, some functions still need to be improved. For example, cytotoxin-associated gene A (cagA) and vacA are two crucial genes reported to correlate with geographic patterns of H. pylori (Yamaoka, 2009; Breurec et al., 2011). The cagA gene is one of the most important virulence genes of H. pylori; it is located at the end of the cag pathogenicity island (cag PAI) and encodes the 120-145 kDa CagA protein (Šterbenc et al., 2019). Another virulence factor is the vacuolating cytotoxin encoded by the gene vacA (Šterbenc et al., 2019). Variation in these two genes has been widely reported among H. pylori groups and can reflect the genomic differences underlying geographic patterns. However, a rapid web-based typing method for these two genes is still lacking and could be considered for HpTT version 2.
H. pylori infection is normally treated with antibiotics without antimicrobial susceptibility testing (Pohl et al., 2019). Antibiotic-resistant H. pylori has been reported in association with several mutations within the genes pbp1A, 23S rRNA, gyrA, rdxA, frxA, and rpoB (Domanovich-Asor et al., 2021). These antibiotic-resistance genes will be included in the second version, even though an antibiotics-specific resource is already available (Yusibova et al., 2020). As more strains and isolates are deposited into our database along with geographic information, HpTT will be able to associate genomic typing with geographic information and phenotypes more powerfully.
In summary, this work presents a global epidemiological analysis of H. pylori isolates. Two functions were designed for the web typing tool, one for genomic typing and the other for phylogenetic and geographic visualization. The accuracy of our genomic typing system was supported by ten previously unused genomes as well as by another published study (Muñoz-Ramírez et al., 2017). Together with the visualization tool, the genomic population structure of H. pylori and its geographic documentation were described. Future studies will expand the tool with crucial virulence genes and antibiotic-resistance genes. This tool is beneficial for the public health surveillance of H. pylori and the monitoring of its epidemic development.
DATA AVAILABILITY STATEMENT
All assembled H. pylori genomes used in this study were downloaded from the NCBI assembly database (https://www.ncbi.nlm.nih.gov/assembly/) under the accession numbers listed in Supplementary Tables 1, 2.
AUTHOR CONTRIBUTIONS
ZX and HT conceived the study. XJ performed the analysis. TZ, YL and WL revised the manuscript. HT provided critical analysis and discussions. All authors discussed the results and contributed to the revision of the final manuscript.
FUNDING
This study was supported by the Science, Technology, and Innovation Commission of Shenzhen Municipality under a grant (No. JCYJ20170412153155228).
ACKNOWLEDGMENTS
We thank China National GeneBank at Shenzhen for supporting this study. We wish to thank Daoming Wang for timely help with genome downloading and analysis. | 6,428.8 | 2021-03-30T00:00:00.000 | [
"Medicine",
"Environmental Science",
"Computer Science"
] |
Intelligent detection method and applied research of diabetic retinopathy based on residual attention network
Diabetic retinopathy (DR) is a late-stage ocular complication of diabetes. Developing a high-accuracy automatic screening technique for fundus images based on deep learning is of great significance for delaying the deterioration of DR. In this paper, we propose an end-to-end framework, RAN, for DR classification and diagnosis; it is based on ResNet, to which an attention mechanism and dilated convolution are added. We conducted experiments on three DR datasets (Kaggle, Messidor, and IDRid) and analyzed and compared the results. A focal loss function is used to address the class-imbalance problem in the DR datasets. The results show that, on the same dataset, RAN improves on the results of the basic neural network. Therefore, optimizing the basic neural network can improve the classification and diagnostic performance for DR.
With the increasing prevalence of obesity worldwide, the future prevalence of diabetes will continue to rise, and the burden of diabetes will also increase [1]. Diabetes is a lifelong disease: the later it is detected and the longer its duration, the higher the risk of complications. Eventually, the complications of diabetes can be disabling and even life-threatening.
Diabetic retinopathy (DR) is a late manifestation of diabetes and one of the most severe complications of diabetic microangiopathy. If it is not detected and treated early, it causes irreversible visual impairment and, in severe cases, blindness. Fundus imaging is a vital examination method for the early detection of DR lesions. Because ophthalmologists are scarce in less developed areas, many patients with diabetes lack early diagnosis and treatment of DR. Therefore, computerized screening technology based on fundus images is of great significance for delaying the deterioration of DR.
Table 1.1 Clinical classification of diabetic retinopathy (DR) [2]
DR Level: Fundus Examination
No obvious retinopathy: No abnormality.
Nonproliferative DR, Mild: Microaneurysms only.
Nonproliferative DR, Moderate: Besides microaneurysms, a few hard exudates or small hemorrhagic spots.
Nonproliferative DR, Severe: No signs of proliferative DR, but in addition to the moderate lesions, at least one of the following (the 4-2-1 rule): more than 20 intraretinal hemorrhages in each of four quadrants; definite retinal venous beading in two quadrants; obvious IRMA in one quadrant.
Proliferative DR: One or more of the following changes: 1. Neovascularization 2. Preretinal hemorrhage 3. Vitreous hemorrhage. [2]
Table 1.2 Clinical classification of diabetic macular edema (DME) [2]
DME Level: Fundus Examination
No obvious DME: No noticeable retinal thickening or hard exudates at the posterior pole.
Obvious DME present: Noticeable retinal thickening or hard exudates at the posterior pole, graded as follows.
Mild DME: Retinal thickening or hard exudates away from the fovea.
Moderate DME: Retinal thickening or hard exudates approaching but not affecting the fovea.
Severe DME: Retinal thickening or hard exudates affecting the fovea.
Figure 1.2 Schematic diagram of the 5 grades of authentic clinical diabetic retinal images
High-quality color retinal images can assist doctors in diagnosing and judging retinopathy. However, the diagnosis of DR requires a clinically experienced ophthalmologist, and DR screening has not been carried out in most grassroots areas, which significantly increases the risk of blindness due to diabetes [3]. Therefore, computer-assisted remote diagnostic technology for fundus images can effectively reduce the visual impairment of diabetic patients caused by insufficient medical resources. This study uses deep learning (DL) methods to process fundus images, laying the foundation for a remote automatic fundus image screening system.
Related Work
At present, most work in the field of ophthalmic image analysis focuses on DR classification and on the segmentation and detection of retinal structures such as the optic disc, the macula, blood vessels, and abnormal regions (hard exudates, soft exudates, hemorrhages, and microaneurysms). Rahim et al. [6] presented an automatic detection method for diabetic retinopathy and maculopathy in fundus images employing fuzzy image processing techniques, implementing a combination of fuzzy image processing techniques, the circular Hough transform, and several feature extraction methods.
Eftekhari et al. [7] used a two-step process and two online datasets to train a CNN, which addresses the imbalance problem and reduces training time while still detecting accurately. Seth et al. [8] used convolutional neural networks and a linear support vector machine to train a network on the benchmark EyePACS dataset.
Experimental results show that the model has high sensitivity and specificity in detecting diabetic retinopathy. Dutta et al. [9] proposed an automatic knowledge model for identifying DR; after testing with a CPU-trained neural network model, three types of back-propagation neural networks were used, and the model was able to quantify the characteristics of different types of blood vessels, exudates, hemorrhages, and microaneurysms. Benzamin et al. [10] proposed a CNN-based deep learning algorithm that can detect hard exudates in fundus images and assist ophthalmologists in diagnosis. Adem et al. [11] improved the accuracy of model diagnosis to assist clinicians in their work. Reference [14] applied VggNet and GoogleNet to the DR1 and Messidor datasets, reporting 97.11% sensitivity, 86.03% specificity, 92.01% accuracy, and an AUC of 0.9834, and Gargeya et al. [15] proposed a data-driven deep neural network based on ResNet.
Research Status of Deep Learning Methods
Building on previous deep learning methods, the Residual Attention Network proposed in this paper mainly comprises an encoder, a residual attention module, and a dilated convolution module.
Encoder
The primary function of the encoder is to extract image features with high-level semantic information. Generally, the deeper the network, the stronger its ability to extract features; however, when the network grows beyond a certain depth, the vanishing-gradient problem appears, which degrades network performance. ResNet [23] solves this problem through residual connections, which allow the network to be made deeper while its feature-extraction ability becomes stronger. It is a structure designed on the basis of VGG [24]; its key addition is the skip-connection structure, which implements residual learning and adds identity mappings so that the increased depth of the network remains effective.
From an intuitive perspective, the residual mapping has less content to learn, so the learning difficulty is lower. The residual unit can be expressed as $x_{l+1} = x_l + F(x_l, W_l)$, where $x_l$ is the input of the $l$-th unit, $F$ is the residual function and $W_l$ denotes its weights. Accumulating the learned features from a shallow unit $l$ to a deeper unit $L$ gives $x_L = x_l + \sum_{i=l}^{L-1} F(x_i, W_i)$. According to the chain rule, the gradient of the backward pass can be expressed as $\frac{\partial \mathcal{L}}{\partial x_l} = \frac{\partial \mathcal{L}}{\partial x_L}\left(1 + \frac{\partial}{\partial x_l}\sum_{i=l}^{L-1} F(x_i, W_i)\right)$, so the gradient always contains an identity term and does not vanish as the network deepens.
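To make the identity-skip idea concrete, the following is a minimal PyTorch sketch of a basic residual unit of the kind described above. It is an illustrative example rather than the exact block configuration used in RAN; the channel count and layer choices are assumptions.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic residual unit: y = x + F(x), so the block only learns the residual F."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.body(x))  # identity mapping plus learned residual

x = torch.randn(1, 64, 56, 56)
print(ResidualBlock(64)(x).shape)  # torch.Size([1, 64, 56, 56])
```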
Residual Attention Module
The attention mechanism in computer vision is essentially an imitation of human visual attention: the human brain locates a target area and gives it more attention while assigning less attention to the surrounding, unimportant areas, thereby obtaining more useful information and suppressing useless information. In traditional image processing, saliency detection, image feature extraction, and sliding-window methods can also be regarded as attention mechanisms. The attention mechanism in deep learning mainly includes two parts: learning a weight distribution (different parts of the input image or feature map receive different weights) and task focusing (dividing the task, designing different sub-networks that attend to different subtasks, and redistributing the learning capacity of the network).
As shown in Figure 3.2, the attention guided module (AGM) is composed of an adaptive average pooling layer followed by two 1x1 convolution layers with different activation functions. The specific operation is as follows. First, the input feature map passes through an adaptive average pooling layer, producing an output of dimension 1×1×C. Then, a 1x1 convolution layer with a ReLU activation outputs a feature map of dimension 1×1×C/r, reducing the number of channels from C to C/r. Next, a 1x1 convolution layer with a sigmoid activation expands the number of channels from C/r back to C, yielding a channel descriptor of dimension 1×1×C that recalibrates the original feature map. The hyperparameter r controls the computational cost of the AGM and is set to 16 in the experiments. Finally, multiplying the obtained channel descriptor with the input feature map completes the recalibration: the importance of each channel is re-weighted using global information, which highlights important information and suppresses background information.
Figure 3.2 Attention guided module
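A minimal PyTorch sketch of the channel-recalibration operation described above (pooling, 1x1 convolution with ReLU, 1x1 convolution with sigmoid, channel-wise multiplication) is shown below. The class name and exact layer configuration are illustrative and are not taken from the authors' released code.

```python
import torch
import torch.nn as nn

class AttentionGuidedModule(nn.Module):
    """Channel recalibration: global average pooling followed by two 1x1 convolutions
    (ReLU then sigmoid) producing a per-channel weight that rescales the input
    feature map. The reduction ratio r defaults to 16 as stated in the text."""
    def __init__(self, channels, r=16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                       # B x C x 1 x 1
            nn.Conv2d(channels, channels // r, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // r, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)  # rescale each channel by its learned weight

feat = torch.randn(2, 512, 14, 14)
print(AttentionGuidedModule(512)(feat).shape)  # torch.Size([2, 512, 14, 14])
```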
In this DR classification experiment, regions such as hard exudates, cotton-wool spots, hemorrhages, and microaneurysms in the fundus image are the regions of interest [25]. Attention methods in the neural network can amplify the information from abnormal lesion areas and suppress other background information, which can improve the accuracy of the model on the DR classification task.
The structure of the residual attention module is shown in Figure 3.3. Based on ResNet, attention structures are stacked to reshape the attention over features, and as the network deepens, the attention modules adapt accordingly [26]. Each attention module contains upsampling and downsampling structures and is divided into two branches, as shown in Figure 3.4: a soft mask branch (the attention branch) and a trunk branch (the original branch). The attention mechanism can be formulated as $H_{i,c}(x) = (1 + M_{i,c}(x)) \cdot T_{i,c}(x)$, where $T$ represents the trunk branch and $M$ the mask branch. The mask branch uses several max-pooling operations to increase the receptive field and, after reaching the minimum resolution, a symmetric network structure upsamples the features back to the original size. The trunk branch can adopt common building blocks (such as residual modules and Inception modules), so the attention module can be easily connected to other networks in a plug-and-play fashion. By stacking this residual attention structure, the advantages of residual learning and the attention mechanism can be thoroughly combined to achieve better results [27].
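The sketch below illustrates only how the soft mask modulates the trunk output according to $H = (1 + M) \cdot T$; the actual mask branch with its max-pooling/upsampling pyramid is abstracted into a placeholder sub-network, so this is a simplified stand-in rather than the paper's full residual attention block.

```python
import torch
import torch.nn as nn

class ResidualAttention(nn.Module):
    """Combine a trunk branch T(x) with a soft mask branch M(x) as H = (1 + M) * T.
    Both branches are placeholder sub-networks here; in the full model the mask
    branch uses a max-pooling/upsampling pyramid to enlarge the receptive field."""
    def __init__(self, trunk, mask):
        super().__init__()
        self.trunk = trunk
        self.mask = mask

    def forward(self, x):
        t = self.trunk(x)
        m = torch.sigmoid(self.mask(x))  # soft mask values in [0, 1]
        return (1.0 + m) * t             # attention residual learning

trunk = nn.Conv2d(64, 64, kernel_size=3, padding=1)
mask = nn.Conv2d(64, 64, kernel_size=3, padding=1)
module = ResidualAttention(trunk, mask)
print(module(torch.randn(1, 64, 28, 28)).shape)  # torch.Size([1, 64, 28, 28])
```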
Dilated Convolution Module
To enlarge the receptive field, this paper also introduces a dilated convolution module. Deep features carry high-level semantic information but lose resolution, whereas shallow features have high resolution but a low semantic level. Dilated convolution enlarges the receptive field of the network without reducing resolution in the two-dimensional case. For a one-dimensional signal, the dilated convolution can be expressed as $y[i] = \sum_{k=1}^{K} x[i + d \cdot k]\, w[k]$, where $y[i]$ is the output feature map, $x[i]$ is the input feature map, $d$ is the dilation rate, $w[k]$ is the $k$-th parameter of the convolution kernel, and $K$ is the kernel size. As shown in Figure 3.5, dilated convolution is equivalent to inserting $d-1$ holes between adjacent kernel parameters: when $d = 1$ it degenerates into a standard convolution, and the larger $d$ is, the larger the receptive field of the kernel. In this paper, a 1x1 standard convolution, 3x3 dilated convolutions with dilation rates d = 2, 3, and 5, and global average pooling are used to extract five levels of image information. For the global-average-pooling branch, an adaptive average pooling layer first produces a 1x1x512 feature map, a 1x1 convolution then reduces the number of channels to 256, and bilinear interpolation expands the spatial size to 14x14. The five extracted feature maps are concatenated with the original feature map to obtain a 14x14x1792 feature map, and a final 1x1 convolution reduces the number of channels to 512. Every convolution is followed by a batch normalization (BN) layer and a ReLU activation, and before each dilated convolution the feature map is padded so that its resolution remains unchanged.
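The following PyTorch sketch assembles the five branches described above (a 1x1 convolution, three 3x3 dilated convolutions with rates 2, 3, and 5, and a global-average-pooling branch) and fuses them back to 512 channels. It follows the channel arithmetic stated in the text (5 × 256 + 512 = 1792), but the exact ordering of BN/ReLU and other details are assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DilatedContextModule(nn.Module):
    """Five parallel branches: a 1x1 convolution, three 3x3 dilated convolutions
    (rates 2, 3, 5) and a global-average-pooling branch, each producing 256
    channels. The outputs are concatenated with the 512-channel input
    (5*256 + 512 = 1792 channels) and fused back to 512 channels by a 1x1 conv."""
    def __init__(self, in_ch=512, branch_ch=256):
        super().__init__()

        def conv(k, d):
            pad = 0 if k == 1 else d  # keep spatial resolution unchanged
            return nn.Sequential(
                nn.Conv2d(in_ch, branch_ch, k, padding=pad, dilation=d, bias=False),
                nn.BatchNorm2d(branch_ch),
                nn.ReLU(inplace=True))

        self.branches = nn.ModuleList([conv(1, 1), conv(3, 2), conv(3, 3), conv(3, 5)])
        self.pool_branch = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(in_ch, branch_ch, 1, bias=False),
            nn.BatchNorm2d(branch_ch),
            nn.ReLU(inplace=True))
        self.fuse = nn.Sequential(
            nn.Conv2d(5 * branch_ch + in_ch, in_ch, 1, bias=False),
            nn.BatchNorm2d(in_ch),
            nn.ReLU(inplace=True))

    def forward(self, x):
        outs = [branch(x) for branch in self.branches]
        pooled = F.interpolate(self.pool_branch(x), size=x.shape[2:],
                               mode="bilinear", align_corners=False)
        return self.fuse(torch.cat(outs + [pooled, x], dim=1))

feats = torch.randn(2, 512, 14, 14)            # batch of 2 feature maps
print(DilatedContextModule()(feats).shape)     # torch.Size([2, 512, 14, 14])
```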
Loss Function
The loss function in a neural network measures the gap between the values predicted by the model and the actual values of the data, and it also serves as a criterion for the generalization ability of the model. The smaller the loss, the better the model performs, and different models generally use different loss functions.
3.5.1 Cross Entropy [28]
In this experiment, the classification module performs the main task of DR classification, whose goal is to predict the label category of each input image. The most commonly used loss function is cross entropy, also known as the log-likelihood or logarithmic loss (called the logistic loss in two-class classification), which describes the difference between probability distributions. It can be written as $L_{CE} = -\sum_i y_i \log \hat{y}_i$, where $y$ represents the original image label, $\hat{y}$ is the probability predicted by the classifier, and the weights of the classification module are the parameters optimized against this loss.
Focal Loss
Since the imbalance problem generally exists in DR datasets, focal loss, which is designed to address imbalance, is introduced in this experiment. It is a modification of cross entropy that multiplies the original loss by a factor weakening the contribution of easily detected samples to model training, so that focal loss mitigates the imbalance between positive and negative samples and relieves the problem that detection losses are easily dominated by a large number of negative samples. Focal loss is defined as $FL(p_t) = -(1 - p_t)^{\gamma} \log(p_t)$, where $\gamma \geq 0$ is the focusing parameter and $(1 - p_t)^{\gamma}$ is called the modulating factor; its purpose is to reduce the weight of easily classified samples so that the model focuses more on hard samples during training.
Focal loss has two important properties. First, when a sample is misclassified, $p_t$ is small, the modulating factor $(1 - p_t)^{\gamma}$ is close to 1, and the loss is essentially unaffected; when $p_t \to 1$, the factor $(1 - p_t)^{\gamma}$ approaches 0, so the weight of well-classified samples is reduced. Second, when $\gamma = 0$, focal loss reduces to cross entropy, and as $\gamma$ increases, the effect of the modulating factor also increases.
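A minimal PyTorch implementation of the multi-class focal loss defined above is sketched below; gamma = 2 is a common choice in the focal loss literature, and this snippet is an illustration rather than the exact loss configuration used in the paper (which may also include class weighting).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FocalLoss(nn.Module):
    """Multi-class focal loss: FL(p_t) = -(1 - p_t)^gamma * log(p_t).
    With gamma = 0 it reduces to ordinary cross entropy; larger gamma
    down-weights well-classified samples more strongly."""
    def __init__(self, gamma=2.0):
        super().__init__()
        self.gamma = gamma

    def forward(self, logits, targets):
        log_pt = F.log_softmax(logits, dim=1).gather(1, targets.unsqueeze(1)).squeeze(1)
        pt = log_pt.exp()                                 # probability of the true class
        loss = -((1.0 - pt) ** self.gamma) * log_pt
        return loss.mean()

logits = torch.randn(8, 5)              # batch of 8 images, 5 DR grades
targets = torch.randint(0, 5, (8,))
print(FocalLoss(gamma=2.0)(logits, targets))
```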
3.6 Transfer Learning [30]
Transfer learning is a machine learning approach in which a model trained on one task is transplanted to the training of other tasks. Thanks to transfer learning, when training data are insufficient, loading EfficientNet weights pre-trained on the ImageNet dataset gives the model a better weight initialization before gradient optimization begins, which helps in training one's own model.
Considering the huge difference between the fundus image datasets and the ImageNet dataset, all network layers were retrained from scratch in the experiments. Because the number of abnormal images in the Messidor and IDRid datasets is very small, two-class classification is more meaningful for clinical application. The datasets above also illustrate the most prominent feature of medical images, namely imbalanced data distribution: the number of normal images is much higher than that of abnormal images, and the amount of data decreases as disease severity increases. The most commonly used remedy for this problem is data augmentation to expand the lesion samples; improving the loss function or the network structure is another widely used optimization approach.
Materials and Approach
For each of the three datasets above, we randomly selected 60%, 15%, and 25% of the images as the training, validation, and test sets, respectively.
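A simple way to realize the 60/15/25 split in PyTorch is sketched below; the TensorDataset is a stand-in for the real fundus image dataset, and the fixed random seed is an illustrative choice rather than the paper's setting.

```python
import torch
from torch.utils.data import TensorDataset, random_split

# Stand-in dataset of 100 small images with 5 DR grades (illustrative only).
dataset = TensorDataset(torch.randn(100, 3, 32, 32), torch.randint(0, 5, (100,)))

n = len(dataset)
n_train, n_val = int(0.60 * n), int(0.15 * n)
n_test = n - n_train - n_val
train_set, val_set, test_set = random_split(
    dataset, [n_train, n_val, n_test],
    generator=torch.Generator().manual_seed(0))
print(len(train_set), len(val_set), len(test_set))  # 60 15 25
```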
Image Preprocessing
Since all the widely used public DR datasets suffer from severely imbalanced data distributions, image preprocessing is used in this experiment to increase the amount of data. The purpose of image enhancement is to process the acquired images so that the features of interest have better contrast and visibility.
Data Augmentation
Commonly used image augmentation methods such as translation, rotation, cropping, scaling, noise addition, and affine transformation usually do not change the class of the object; they are the earliest and most widely used type of augmentation.
Another approach is color transformation: the color of an image can be changed along four dimensions, namely brightness, contrast, saturation, and hue. In practical applications, multiple augmentation methods are usually combined, as shown in Figure 4.3. In addition, because of limited computing resources, images are first scaled to 224x224 pixels and then fed to the network for training and testing.
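As an illustration of how such geometric and color augmentations are typically combined before resizing to 224x224, the torchvision pipeline below is a plausible sketch; the specific parameter values are assumptions and are not taken from the paper.

```python
import torchvision.transforms as T

# Illustrative augmentation pipeline for fundus images; parameter values are assumed.
train_transform = T.Compose([
    T.Resize((224, 224)),                  # scale to 224x224 before training
    T.RandomHorizontalFlip(),
    T.RandomRotation(degrees=15),
    T.RandomAffine(degrees=0, translate=(0.05, 0.05), scale=(0.9, 1.1)),
    T.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2, hue=0.05),
    T.ToTensor(),
])
# The pipeline is applied to each PIL image when the dataset is loaded for training.
```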
Implementation Details
The experiments use the PyTorch deep learning framework and the OpenCV image processing library, implemented on the Ubuntu 16.04 operating system with a GeForce RTX 2080Ti graphics card. The Adam optimizer is used with an initial learning rate of 0.001; the batch size is set to 8 for training and 1 for testing, and a total of 60 epochs are trained. The test set is evaluated after every training epoch, and only the models and results with the highest sensitivity and accuracy are reported.
Evaluation Index
In this experiment, the relationship between the model's predictions and the true labels of the data is evaluated with the following criteria, based on the numbers of true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN). Sensitivity and specificity are calculated as SE = TP / (TP + FN) and SP = TN / (TN + FP). The higher the SE, the greater the probability that a DR image is correctly diagnosed; the higher the SP, the greater the probability that a normal image is predicted to be normal. In clinical applications, a missed diagnosis has a more serious adverse effect on patients, so SE is the more significant metric in DR classification. ACC represents the probability that all samples are classified correctly and is calculated as ACC = (TP + TN) / (TP + TN + FP + FN).
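The evaluation criteria above reduce to simple counts over the confusion matrix; the short Python sketch below computes SE, SP, and ACC for a binary DR/normal labelling and is provided only as an illustration, with invented example labels.

```python
def confusion_counts(y_true, y_pred):
    """Binary confusion-matrix counts with 1 = DR (positive) and 0 = normal (negative)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def se_sp_acc(y_true, y_pred):
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    se = tp / (tp + fn)                     # sensitivity: DR images correctly flagged
    sp = tn / (tn + fp)                     # specificity: normal images correctly cleared
    acc = (tp + tn) / (tp + tn + fp + fn)   # overall accuracy
    return se, sp, acc

print(se_sp_acc([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0]))  # (0.667, 0.667, 0.667)
```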
Results and Discussion
In this paper, on the three DR datasets Kaggle, Messidor, and IDRid, the commonly used deep learning methods and our proposed RAN are applied, with cross entropy and focal loss as the loss functions, to perform classification and diagnosis experiments for DR and DME; the results are then compared and analyzed as follows.
Messidor Results
As can be seen from Figures 5.1-5.3, because of the imbalance problem in the DR datasets, using focal loss as the loss function is more suitable than cross entropy in each classification task, and the accuracy is greatly improved.
Figure 5.4 Visualization of Grad-CAM [39] DR classification
In addition, we also used Grad-CAM to visualize the attention heat map during the fundus image DR classification process. As shown in Figure 5.4, we can clearly see that the optimization method is more focused on abnormal parts than the basic neural network structure.
The above experimental results demonstrate the strong competitiveness of CNNs in clinical diagnostic applications, and RAN achieved good results on the DR classification task. The image augmentation used in this experiment brings the amount of data in each DR class to a relatively balanced state, and the loss-function optimization also achieved satisfactory results in alleviating the data imbalance problem. The attention mechanism added to the model allows it to pay more attention to fine-grained image features during classification and plays an active auxiliary role in the network's feature extraction.
Using optimization methods such as dilated convolution can also improve the results of the neural network. In short, using our RAN can enhance the accuracy of DR classification and diagnosis on most fundus images.
Conclusion
This paper proposes a classification algorithm, the Residual Attention Network (RAN), which combines an attention mechanism and dilated convolution for diabetic retinopathy (DR) detection. The classification performance of the model is verified on the Kaggle, Messidor, and IDRid competition data. Since imbalance between data categories leads to overfitting during model training, data augmentation and focal loss are used. To address the small differences between DR categories, we applied a series of preprocessing steps to the original retinal images to make hemorrhages and exudates in the fundus images more obvious. An attention mechanism is then added to the network to extract fine-grained image features so that the network can better distinguish the differences between lesion types, and dilated convolution is used in the network to increase the receptive field. Through this combination of a ResNet-based residual network, the attention mechanism, and dilated convolution, the accuracy of the diabetic retinopathy classification task can be improved. However, the increase in accuracy is not yet large. Therefore, in future work we will integrate prior knowledge of age, blood glucose, blood pressure, intraocular pressure, and medical history into the DR classification model to incorporate more disease-related information and effectively improve diagnostic performance. In addition, multi-task experiments are expected to mutually promote improvements in results; how to integrate the results of optic disc detection, macula detection, and blood vessel segmentation into the DR classification model will also be a focus of future work. Building a robust and accurate deep learning model for DR screening is the common aspiration of algorithm engineers and clinicians, and it cannot be achieved without the joint efforts and cooperation of both parties.
Disclosures. The authors declare that there are no conflicts of interest related to this article. | 4,880.4 | 2021-06-24T00:00:00.000 | [
"Computer Science"
] |
Human β-defensin 3 gene modification promotes the osteogenic differentiation of human periodontal ligament cells and bone repair in periodontitis
Efforts to control inflammation and achieve better tissue repair in the treatment of periodontitis have been ongoing for years. Human β-defensin 3, a broad-spectrum antimicrobial peptide, has been proven to have a variety of biological functions in periodontitis; however, relatively few reports have addressed its effects on the osteogenic differentiation of human periodontal ligament cells (hPDLCs). In this study, we evaluated the osteogenic effects of hPDLCs carrying an adenoviral vector encoding human β-defensin 3 in an inflammatory microenvironment. Human β-defensin 3 gene-modified rat periodontal ligament cells were then transplanted into rats with experimental periodontitis to observe their effects on periodontal bone repair. We found that the human β-defensin 3 gene-modified hPDLCs presented high levels of osteogenesis-related gene expression and calcium deposition, and the p38 MAPK pathway was activated in this process. In vivo, human β-defensin 3 gene-transfected rat PDLCs promoted bone repair in SD rats with periodontitis, and the p38 mitogen-activated protein kinase (MAPK) pathway might also have been involved. These findings demonstrate that human β-defensin 3 accelerates osteogenesis and that human β-defensin 3 gene modification may offer a potential approach to promote bone repair in patients with periodontitis.
INTRODUCTION
Periodontitis, a chronic inflammatory disease induced by dental plaque, damages the integrity of tooth-supporting tissues. 1 It can disturb normal bone metabolism and eventually result in alveolar bone loss. 2 According to epidemiological investigations, at least one-half of the world's population suffers from periodontitis, and it has become the eleventh disease among global diseases that cause short-or long-term loss of health. 3 Poor periodontal condition is a major problem that affects the oral health of people in China. 4 Inflammatory responses and bone loss due to periodontitis have become the most critical and challenging problems to be solved for individuals to achieve healthy periodontium. 5,6 Human β-defensin 3 (hBD3), a small molecule cationic antimicrobial peptide consisting of 45 amino acids, is thought to be among the most promising antimicrobial peptides because of its broad-spectrum antibiotic activity. 7,8 In addition, hBD3 exhibits diverse functions in host defences, including in immune regulation and inflammatory processes. 9,10 It was reported that hBD3 has significant anti-inflammatory activity in the host by regulating Tolllike receptor signalling pathways. 11 Kiatsurayanon et al. 12 found that hBD3 could regulate the innate immunity of skin by increasing the expression of several claudins, inducing claudin localization along cell-cell borders and reducing the paracellular permeability of keratinocyte layers. Moreover, hBD3 also plays a pivotal role in cell proliferation and differentiation. hBD3 potentially promotes the proliferation of periodontal ligament (PDL) fibroblasts 13 and the osteogenic differentiation of osteoblast-like human osteosarcoma cells (MG63 cells). 14 The application of hBD3 in periodontitis treatment has been studied for years. 15, 16 Bedran et al. 17 showed that hBD3 had antiinflammatory activity in a three-dimensional (3D) coculture model of gingival epithelial cells and fibroblasts. Our team previously demonstrated that recombinant hBD3 inhibits periodontitis development by suppressing inflammatory responses in macrophages and modulates macrophage activation during the acute inflammatory response to Porphyromonas gingivalis lipopolysaccharides (LPS). 18,19 In an in vivo study, transplantation of periodontal ligament cell (PDLC) sheets expressing hBD3 promoted bone repair and osteocalcin (OCN) expression in periodontal tissues. 20 In addition to the its antimicrobial, anti-inflammatory and immune regulation effects, hBD3 affects cell differentiation during these processes. PDLCs derived from the PDL possess stem cell-like attributes and have excellent potential for periodontal regeneration. 21,22 Thus it is reasonable to hypothesize that increasing hBD3 expression in PDLCs may contribute to periodontal regeneration.
Genetic engineering technology has been widely used to transfer relevant exogenous genes into a host to produce valuable proteins or peptides, such as hBD3. By gene transfection, we can achieve more stable and sustained expression of target proteins. 23 Our team has successfully modified the hBD3 gene and produced a stable level of hBD3 in hBD3-engineered human PDLCs (hPDLCs). Hence, we investigated whether this hBD3 gene modification promotes the osteogenic differentiation of hPDLCs.
The process of osteogenic differentiation was proven to be modulated by various signalling pathways. 24,25 Mitogen-activated protein kinases (MAPKs) act as prominent intracellular enzymes and can be activated by extracellular stimuli, enabling cells to participate in various activities, including apoptosis, neutrophilmediated inflammatory processes, wound healing and tissue remodelling. 26 Among all the MAPK pathways, the p38 MAPK pathways seems to be the most closely related to osteogenesis. Lee et al. 27 reported that osteoblast differentiation could be enhanced by berberine through the p38 MAPK-Runx2 pathway both in vitro and in vivo. Furthermore, it was proven that the p38 MAPK pathway also modulates the proliferation and osteogenic differentiation of human bone-derived marrow mesenchymal stem cells. 28 There are few reports about hBD3 and osteogenesis; however, other cationic antimicrobial peptides, such as LL37, have been found to affect the proliferation and differentiation of MC3T3-E1 cells 29 and to enhance bone regeneration in a rat calvarial bone defect through the p38 MAPK pathway. 30 Therefore, we hypothesized that the p38 MAPK pathway might also modulate the osteogenic process of hBD3 gene-modified hPDLCs.
In this study, a recombinant adenovirus vector carrying the hBD3 gene was successfully constructed. Then the osteogenic differentiation of hPDLCs with hBD3 gene modification in an inflammatory environment and the potential mechanisms were investigated. Further experiments were conducted in vivo to observe the effects of rat PDLCs (rPDLCs) with hBD3 gene modification on periodontal repair and regeneration in the rat periodontitis model.
RESULTS
Infection efficiency of Ad-hBD3 and hBD3 overexpression in the hPDLCs
A positive correlation between the multiplicity of infection (MOI) value and mean fluorescence intensity was observed with adenoviral MOI values ranging from 100 to 200 (Fig. 1a). The flow cytometric results are shown in Fig. 1b.
Effects of hBD3 gene transfection on the osteogenic differentiation of the hPDLCs
Escherichia coli LPS (1 μg·mL⁻¹) was added to the cells to simulate an inflammatory microenvironment. 31 Alkaline phosphatase (ALP) and alizarin red S (ARS) staining assays were conducted to detect changes in osteogenic differentiation. Quantitative real-time PCR (qRT-PCR) and western blotting (WB) were performed to observe osteogenesis-related gene and protein expression in the hPDLCs. The results demonstrated that ALP and ARS staining were darker in both the Ad-hBD3 and Ad-hBD3+LPS groups (Fig. 2a, b). The mRNA and protein expression levels of ALP, Runx2 and COL1 were upregulated in the Ad-hBD3 and Ad-hBD3+LPS groups compared with those in the empty vector (NC) group (Fig. 2c, d). There were no significant differences between the Ad-hBD3 and Ad-hBD3+LPS groups.
Role of the p38 MAPK pathway in the osteogenic differentiation of the hPDLCs transfected with Ad-hBD3
In this experiment, all groups were assessed in the inflammatory microenvironment created with E. coli LPS. The relative expression of phosphorylated (p)-p38, a specific protein in the p38 MAPK pathway, was higher in the Ad-hBD3 group (Fig. 3a). The ALP activity and mineralization levels of the hPDLCs were higher in the Ad-hBD3 group than they were in the NC group. However, after adding the p38 MAPK pathway inhibitor SB203580, the promotion effect was significantly suppressed (Fig. 3b, c). Similarly, the expression of ALP, Runx2 and COL1 in the Ad-hBD3 group decreased significantly at both the mRNA (Fig. 3d) and protein levels (Fig. 3e) after SB203580 was added.
Experimental periodontitis and rPDLC transplantation
rPDLCs were isolated from the PDL tissues of Sprague-Dawley (SD) rats (Fig. 4a). The immunofluorescence results showed that the cultured cells expressed vimentin (red) but not cytokeratin, which indicates that they were derived from mesoderm. The three cell growth phases, lag, log (exponential) and plateau, were observed on the cell growth curve generated from the Cell Counting Kit-8 (CCK-8) measurements. After transfection with Ad-hBD3, the rPDLCs successfully expressed hBD3 (Fig. 4b).
After ligation of the bilateral maxillary second molars, rat periodontitis models were established. rPDLCs transfected with hBD3 were transplanted into the palatal gingival tissues near the ligatured molars at mesial, middle and distal sites. Frozen sections of the injected gingival tissues were observed by confocal fluorescence after 2 h and 24 h. The images of green fluorescent protein (GFP) carried by the Ad-hBD3-transfected rPDLCs indicated that the rPDLCs had been successfully transplanted into periodontal tissues (Fig. 4c).
Effects of the hBD3 gene-transfected rPDLCs on bone repair in the SD rats with periodontitis Two weeks after rPDLC transplantation, the bilateral maxillary bone was sampled for analysis by micro-computed tomographic (micro-CT) scanning. The 3D images showed that the alveolar bone loss around the ligatured molars of the control group was much more obvious than that of the blank group, which means that the periodontitis models were successfully created. In addition, the Ad-hBD3 group presented less bone resorption, a higher bone mineral density and a higher bone volume ratio than presented by the NC group (Fig. 5). These results indicated that the transplantation of the hBD3 genemodified rPDLCs ameliorated bone loss and potentially promoted periodontal repair in the SD rats with periodontitis.
Effects of hBD3 gene-transfected rPDLCs on periodontal tissue destruction in the SD rats with periodontitis Haematoxylin and eosin (H&E) and Masson's trichrome staining results showed that, in the control group, the alveolar bone around the second molars was obviously absorbed, and hyperplasia of gingival epithelial spikes and a thickened stratum spinosum were also observed, as was inflammatory cell infiltration and collagen fibre destruction. In the Ad-hBD3 group, in which the rPDLCs were transfected with the hBD3 gene, the morphology of the alveolar bone in the ligation area appeared higher and thicker compared with that of the control and NC groups. We also observed fewer inflammatory cells and lower epithelial hyperplasic spikes in the gingival epithelium, and the collagen fibres were arranged in a more orderly fashion in the Ad-hBD3 group than they were in the control group (Fig. 6).
Effects of hBD3 gene-transfected rPDLCs on the p-p38 expression in the SD rats with periodontitis An immunohistochemistry assay of p-p38 was conducted to evaluate whether the p38 MAPK pathway was expressed in the rPDLCs with hBD3 gene modification applied in vivo. We found that p-p38 was expressed mostly in gingival epithelial tissues. The blank and control groups showed low levels of p-p38 expression, while the Ad-hBD3 group exhibited the highest p-p38 expression levels (Fig. 7a). The mean density of p-p38 in the Ad-hBD3 group was significantly higher than that in the control group (Fig. 7b).
DISCUSSION
Inflammation-induced bone loss has recently become a research hotspot due to the profound effects of inflammatory responses on local and systematic bone metabolism. 1 Periodontitis is considered to be one of the diseases with inflammatory responses and destruction of periodontal tissues, including evident alveolar bone loss. 32,33 In our study, 1 μg·mL −1 E. coli LPS was chosen to create the inflammatory microenvironment, because LPS was proven in previous studies to induce an hPDLC inflammatory response. 34,35 hBD3 is known as a kind of cationic antimicrobial peptide generally considered to have excellent antibacterial activity and certain immunomodulatory functions. 18,36 In this study, we evaluated its effect on the osteogenesis of hPDLCs. hPDLCs with overexpressed hBD3 were constructed have more sustainable and stable expression of hBD3. Gene delivery technologies have become an emerging approach in biomedical fields. 37,38 High transduction efficiency and low insertional mutagenesis have rendered adenoviral vectors as attractive gene delivery vehicles. 39 Moreover, adenoviral vectors exhibit strong and sufficient effects. 40,41 In our study, hPDLCs transfected with adenovirus containing the hBD3 gene were found to express the hBD3 protein for at least 7 days, a finding consistent with the results of a previous study. 42 Acting as a vital component of the periodontium, hPDLCs participate in periodontal tissue regeneration and are widely used in tissue engineering because of their characteristic ability to produce collagen and bone-associated proteins, such as OCN and osteopontin. 6,18,43 In this study, in addition to remarkable ALP activity and impressive mineralized nodules, hBD3 gene-modified hPDLCs presented with significant expression of osteogenic proteins; all of these results indicated that hBD3 promoted osteogenic differentiation of the PDLCs.
Primary culture rPDLCs modified by hBD3 gene transfection were transplanted into the rat periodontitis models. The results from the micro-CT and histology assays showed obvious bone repair and alleviated inflammatory response in the Ad-hBD3 group. In vitro, we proved that hBD3 gene transfection promoted the expression of osteogenic indicators and osteogenic differentiation of the PDLCs. After transplantation, the hBD3 gene-modified rPDLCs might have had better osteogenesis ability to enhance osteogenesis during the bone remodelling process. The specific mechanism might involve the regulation of angiogenesis and the recruitment of stem cells, 30 or the inhibited osteoclast formation and bacterial activity, 44 as indicated by previous studies with other antimicrobial peptides. 45 We have demonstrated that hBD3 enhanced the osteogenesis of the PDLCs and p-p38 expression in vitro. After adding p38 MAPK pathway inhibitor SB203580, the promotion effect was significantly suppressed, which means that the process might be regulated by the p38 MAPK pathway in vitro. Similarly, we also detected higher expression levels of p-p38 in the hBD3 gene-modified rPDLC transplantation group than we found in the control group in vivo. However, in vivo environments are much more complicated owing to the multiple signalling pathways involved, and the MAPK pathway may play only a potential role in the process of bone repair in vivo. Further research concerning the detection of downstream targets of the MAPK pathway is also needed to identify the specific mechanisms. Furthermore, it is important for long-term tracing and the observation of transplanted cells in vivo. More research is needed to further observe the localization of transplanted PDLCs and the expression of related bone markers. The inherent antibiotic and immunomodulatory characteristics of hBD3 may have affected the osteogenic differentiation of hBD3 gene-modified PDLCs. More research is needed with a control group consisting another antibiotic agent to confirm the mechanism.
In conclusion, our study evaluated the osteogenic effect of hBD3 gene-modified hPDLCs in vitro and in vivo. The hBD3 gene-modified hPDLCs showed greater osteogenic ability in vitro through the p38 MAPK pathway. Furthermore, hBD3 was able to alleviate the inflammatory destruction of periodontitis along with the promotion of bone repair in vivo. hBD3 may play multiple roles in the treatment of periodontitis, and hBD3 gene-modified hPDLC transplantation may be a potential approach for the treatment of periodontitis.
Fig. 3 The role of the p38 MAPK pathway in the osteogenic differentiation of the hPDLCs after Ad-hBD3 transfection in an inflammatory environment. hPDLCs were transfected with Ad-hBD3 or NC (Ad-GFP) and all were treated with E. coli LPS (1 μg·mL−1). a p38 and p-p38 expression in the hPDLCs on day 3, b ALP staining and ALP activity on day 7, c alizarin red S staining of the mineralized nodules on day 21. d, e Bone-related gene and protein expression after adding SB203580 (**P < 0.01, ***P < 0.001)
Adenoviral vectors and gene transfection
The recombinant adenoviruses carrying the GFP gene (NC, Ad-GFP) or the hBD3 gene (Ad-hBD3) were purchased from GenePharma (Shanghai, China). The hPDLCs (1.0 × 10^5 cells per well) were seeded in 6-well plates. After reaching 80% confluence, the cells were transfected with Ad-GFP (NC) or Ad-hBD3 at different MOI values, ranging from 100 to 200. GFP was used as the reporter molecule to assess the transfection efficiency of Ad-hBD3, and the infection efficiency was determined by flow cytometry. Ad-hBD3-hPDLCs were assayed for hBD3 expression by WB.
Quantitative real-time PCR
The mRNA levels of osteogenesis-related genes were determined by qRT-PCR. hPDLCs (1 × 10^5 cells per well) were seeded in six-well plates containing growth medium. Transfection with Ad-GFP (NC) or Ad-hBD3 was conducted when the cell fusion rate reached 80%. On day 3, after hBD3 expression was detected, E. coli LPS (1 μg·mL−1) was added to the medium and incubated for 24 h to simulate an inflammatory microenvironment. 46,47 TRIzol reagent (Tiangen, Beijing, China) was used to extract total RNA from the cells. The reverse transcription of total RNA to cDNA was performed using the PrimeScript RT Reagent Kit (TaKaRa, Otsu, Japan). The sequences of the primers for qRT-PCR are shown in the Supplementary Material. The cycle threshold (Ct) of each gene was normalized to the Ct of glyceraldehyde 3-phosphate dehydrogenase (GAPDH), which was determined simultaneously on the same plate, and relative expression was then calculated by the comparative 2^−ΔΔCt method. All of the samples were run in triplicate.
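To make the comparative 2^−ΔΔCt calculation concrete, a minimal Python sketch is given below; the Ct values are illustrative placeholders rather than measured data, and GAPDH is used as the reference gene as described above.

```python
import numpy as np

def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Comparative 2^-ddCt method: fold change of a target gene normalized
    to the reference gene and expressed relative to the control group."""
    d_ct_sample = ct_target - ct_ref              # normalize to housekeeping gene
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_sample - d_ct_control            # relative to the control group
    return 2.0 ** (-dd_ct)

# Illustrative triplicate Ct values (placeholders, not data from this study)
ct_target = np.array([22.1, 22.3, 22.0])   # target gene, Ad-hBD3 group
ct_gapdh  = np.array([16.0, 16.1, 15.9])   # GAPDH, Ad-hBD3 group
fold = relative_expression(ct_target.mean(), ct_gapdh.mean(), 24.5, 16.0)
print(f"fold change vs control: {fold:.2f}")
```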
ALP activity and staining assay
An ALP assay kit (Abcam, MA, USA) was used to assess ALP activity. After transfection, hPDLCs (3.0 × 10^4 cells per well) were seeded into 24-well plates. After overnight culture, the medium was changed to an osteogenic differentiation medium with AuNPs (45 nm, 10 μmol·L−1). After culturing for 7 days, the cells were rinsed twice with phosphate-buffered saline (PBS). After a series of steps following the manufacturer's instructions, the final solution was added to the plates. The absorbance was assessed at 405 nm with a SpectraMax M3 microplate reader (Molecular Devices, Sunnyvale, CA, USA). The ALP activity level was determined, relative to that of the control group, as a percentage against a standard curve. The extent of ALP staining was determined on the same day. The plates were rinsed twice with PBS and then fixed in 4% paraformaldehyde for 30 min. Next, the BCIP/NBT ALP Staining Kit (Beyotime Institute of Biotechnology, Shanghai, China) was used for cell staining according to the manufacturer's instructions. The stained plates were air-dried, examined under a light microscope (Olympus IMT-2, Tokyo, Japan) and photographed with a digital camera (Canon EOS 550D, Tokyo, Japan).
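A minimal sketch of how a background-corrected A405 reading can be converted to ALP activity via a linear standard curve and expressed as a percentage of the control group is shown below; all numerical values are illustrative assumptions, not measurements from this study.

```python
import numpy as np

# Illustrative standard curve (product amount in nmol vs A405); placeholder values.
std_conc = np.array([0.0, 4.0, 8.0, 12.0, 16.0, 20.0])
std_a405 = np.array([0.05, 0.18, 0.31, 0.44, 0.57, 0.70])
slope, intercept = np.polyfit(std_a405, std_conc, 1)   # linear fit of the standard curve

def alp_activity(a405):
    """Convert a background-corrected A405 reading into a product amount."""
    return slope * a405 + intercept

sample = alp_activity(0.52)    # hypothetical Ad-hBD3 well
control = alp_activity(0.25)   # hypothetical control well
print(f"ALP activity, % of control: {100 * sample / control:.1f}%")
```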
ARS staining
The cells in all groups were incubated in osteogenic medium for 3 weeks and then rinsed twice with PBS and fixed in 4% paraformaldehyde for 30 min. The cells were washed with distilled water (DW), treated with 2% ARS solution (Sigma-Aldrich, USA) for 5 min and then washed 3-5 times with DW to remove unbound ARS. The stained plates were air-dried and examined under a light microscope.
Five-week-old female SD rats were sacrificed, and their six molars were extracted, placed into 0.1% collagenase I solution and shaken for 2 h at 37 °C. After centrifugation, the cells were resuspended in growth medium and cultured at 37 °C in 5% CO2 for 3-5 days, and the level of cell adherence was then observed. The culture medium was refreshed every 2 days. After 1-2 passages, the cells were fixed with 4% paraformaldehyde for the subsequent immunofluorescence staining of anti-cytokeratin and anti-vimentin (CST, MA, USA), which were used to identify the cell origin. The cells were seeded into 96-well plates (5.0 × 10^3 cells per well) and cultured for 10 days; 10 μL of CCK-8 reagent was added to each well on each day and incubated at 37 °C in 5% CO2 for 4 h. The absorbance was measured with a SpectraMax M3 microplate reader (Molecular Devices, Sunnyvale, CA, USA) at a wavelength of 450 nm. Cell viability was calculated as the relative absorbance after excluding the background absorbance.
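The viability calculation described above can be illustrated with the short sketch below; the absorbance values are hypothetical and serve only to show the background subtraction and normalisation to the control group.

```python
import numpy as np

def cck8_viability(a_sample, a_blank, a_control):
    """Relative viability from CCK-8 absorbance at 450 nm,
    after subtracting the background (blank) absorbance."""
    return (np.asarray(a_sample) - a_blank) / (a_control - a_blank)

# Illustrative A450 readings for one day of the 10-day growth curve
a_blank = 0.08                                 # medium + CCK-8, no cells
a_control = np.array([0.95, 0.98, 0.93])       # untreated rPDLCs
a_treated = np.array([0.90, 0.92, 0.88])       # hBD3 gene-modified rPDLCs
viability = cck8_viability(a_treated, a_blank, a_control.mean())
print(f"relative viability: {viability.mean():.2f}")
```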
Experimental periodontitis and rPDLC transplantation
Twenty female SD rats (5 weeks old) were randomly allocated into 4 groups (5 rats per group): a blank group (without ligature or cell transplantation), a control group (with ligature alone), an NC group (with ligature and transplantation of cells treated with the empty vector), and an Ad-hBD3 group (with ligature and transplantation of cells treated with Ad-hBD3). Silk threads were soaked in a bacterial suspension of P. gingivalis for 2 h in advance and were then used to ligate the bilateral maxillary second molars of the SD rats. The rPDLCs subjected to the different treatments were dissociated into cell suspensions in 0.9% NaCl (1 × 10^4 cells per μL) and then injected with a 100-μL micro-syringe (Hamilton, Bonaduz, Switzerland) into the mesial, middle and distal sites of the palatal gingival tissues around the ligated molars. The palatal side was selected for transplantation to provide the best possible visual field and operability, and the transplanted cells could migrate around the entire tooth. After 2 and 24 h, the relevant gingival tissues were collected and processed into frozen sections to confirm successful rPDLC transplantation. rPDLC transplantations were performed once per week, and 2 weeks later, all rats were sacrificed, and samples were taken.
Micro-CT scanning
The maxillary bone samples of the SD rats were trimmed and placed into 4% paraformaldehyde fixative solution for 24 h. The next day, the samples were collected and prepared for micro-CT scanning with a Skyscan 1176 scanner (Bruker, Karlsruhe, Germany). The scanning parameters were as follows: a rotation angle of 360°, a tube voltage of 70 kV, a tube current of 353 μA, an X-ray exposure time of 404 ms, and a scanning layer thickness of 18 μm. The data were reconstructed with the NRecon software and then imported into the CTVox and CTAn software to obtain 3D images and relative data.
Histological analysis and immunohistochemistry assay for p-p38 detection
Two weeks after rPDLC transplantation, the SD rats were euthanized. The right maxillae were collected and fixed in 4% paraformaldehyde for 48 h and then placed into 10% EDTA decalcifying solution for 1 month. Afterward, the specimens were dehydrated with a gradient series of 40%, 50%, 60%, 70%, 80%, 90%, and 95% ethanol for 12 h at each stage, and were finally soaked in a 1:1 mixture of 95% ethanol and xylene for 12 h for clearing. After sectioning the samples to a thickness of 5 μm, they were stained with H&E and Masson's trichrome. For the immunohistochemistry assay of p-p38, the sections were stained with a p-p38 antibody, and the mean density of p-p38 was measured in three different samples from the same group. A light microscope was used to observe the local histological structures.
Statistical analysis
In our study, in vitro experiments were repeated at least three times, and in vivo experiments included at least three samples from each group to reduce errors and support the statistical analysis. Statistical calculations were performed with the SPSS 23 statistical software (SPSS, Chicago, IL, USA). Depending on the context, significant differences were determined using Student's t test or one-way analysis of variance (ANOVA) followed by the Bonferroni post hoc test; P values < 0.05 were considered statistically significant. | 5,180 | 2020-04-29T00:00:00.000 | [
"Medicine",
"Biology"
] |
Enhanced laser-energy coupling to dense plasmas driven by recirculating electron currents
The absorption of laser energy and dynamics of energetic electrons in dense plasma is fundamental to a range of intense laser-driven particle and radiation generation mechanisms. We measure the total reflected and scattered laser energy as a function of intensity, distinguishing between the influence of pulse energy and focal spot size on total energy absorption, in the interaction with thin foils. We confirm a previously published scaling of absorption with intensity by variation of laser pulse energy, but find a slower scaling when changing the focal spot size. 2D particle-in-cell simulations show that the measured differences arise due to energetic electrons recirculating within the target and undergoing multiple interactions with the laser pulse, which enhances absorption in the case of large focal spots. This effect is also shown to be dependent on the laser pulse duration, the target thickness and the electron beam divergence. The parameter space over which this absorption enhancement occurs is explored via an analytical model. The results impact our understanding of the fundamental physics of laser energy absorption in solids and thus the development of particle and radiation sources driven by intense laser–solid interactions.
Introduction
Laser energy absorption by electrons is central to high intensity laser-plasma interaction physics. It strongly influences the optical properties of the plasma [1,2] and the characteristics of beams of high energy photons [3,4] and ions [5,6] produced. Given that the energy coupling to electrons is highly sensitive to a number of interrelated and evolving laser and plasma parameters, absorption remains one of the most challenging areas of the field to understand. Addressing this challenge is essential for the many applications which require efficient energy transfer to particles and radiation, such as advanced ignition schemes for inertial confinement fusion [7,8] and isochoric heating of matter [9].
In the case of dense plasma, the dominant laser energy coupling mechanisms, such as J×B acceleration [10], vacuum heating [11] and resonance absorption [12], are sensitive to parameters including the laser polarisation and intensity, its incidence angle onto target and the density scale length of the target plasma [13][14][15]. In addition, the occurrence of multiple absorption mechanisms and transitions between mechanisms as the laser-plasma interaction evolves make it difficult to investigate the coupling dynamics experimentally. In the absence of experimental techniques to resolve the various coupling processes occurring on ultrashort timescales, we can progress our understanding of the overall absorption physics by determining the total laser absorption from measurements of the reflected and scattered laser light [16,17], and its dependency on laser and plasma parameters. The resulting, essentially empirical, insight feeds into the future design of experiments and is ultimately essential for the verification of theoretical and numerical models of absorption physics.
To date, there are few experimental measurements of total absorption in the relativistic laser-dense plasma interaction regime. Results reported in Ping et al [18], for 0.8 μm wavelength laser pulses with duration equal to 150 fs and energy between 0.02 and 20 J, demonstrate an intensity dependent scaling of laser absorption which extends beyond 90% at ∼2×10 20 W cm −2 . These results were utilised to develop an empirical scaling law [19] and to develop a theoretical model which identifies upper and lower bounds on laser-coupling for a given laser intensity [20]. Whilst this demonstrates that absorption depends strongly on laser intensity by variation of pulse energy, the influence of focal spot size and pulse duration on total energy absorption in the relativistic regime is not well characterised. Changes to the flux and temperature of the electrons inferred from measurements of laser-driven proton acceleration and x-ray generation indicate that there are more complex coupling dynamics at work when the focal spot size is varied and this merits further investigation [21][22][23].
In this article, we report on the first measurements of total absorption scaling as a function of laser intensity in which the pulse energy and focal spot size are separately varied, for the case of relativistic laser-foil interactions. We demonstrate that the scaling of the absorption with intensity by variation of focal spot size is significantly slower than by variation of laser pulse energy. Particle-in-cell (PIC) simulations reveal that enhanced absorption observed with relatively large focal spot sizes is a consequence of additional heating of the relativistic electrons which recirculate within the target between the sheath fields formed on the surfaces [24]. Using a simple analytical model, we explore the parameter space for which this effect occurs and show that it is a function not only of the focal spot size but also the laser-pulse duration, target thickness and the electron spectral and divergence distributions. The results highlight the importance of considering changes to absorption caused by the recirculating population of relativistic electrons in experiment design and ultimately in the development of laser-driven particle and radiation sources.
Methodology
The experiment was performed using the high power PHELIX laser at the GSI laboratory near Darmstadt. Linearly polarised pulses with duration, τ L , equal to (700±100) fs (full width at half maximum, FWHM), with energy, E L , varied between 4 and 180 J and a central wavelength of 1.053 μm, were incident at 0° (i.e. along the target normal axis) onto aluminium foil targets with thickness, l, equal to 6 and 20 μm. The laser intensity contrast was 10^−12 at 1 ns and 10^−4 at 10 ps, prior to the peak of the pulse [25]. The laser energy was changed by the rotation of a calibrated wave plate between two crossed polarisers between the front end and preamplifier sections. The energy range from 10 to 200 J can be selected without any additional change to the laser or beamline configuration, and consequently the contrast does not change significantly over that range. This was verified via test shots with extremely thin foil targets at full power and the use of a third-order scanning autocorrelator at low energies. An f/1.5 off-axis parabola was used to focus the laser pulse to a focal spot diameter, f L , which was varied in the range 4-270 μm (FWHM) by movement of the parabola toward a fixed target position. By separately varying E L and f L , we investigate laser absorption over the intensity range I L = 10^17-10^20 W cm^−2.
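As a rough guide to how the quoted intensity range follows from E L, τ L and f L, the sketch below estimates the peak intensity by dividing the pulse power by the FWHM focal-spot area; this simple estimate ignores the temporal and spatial pulse shape and the energy lying outside the FWHM spot, so it should be read as an order-of-magnitude check rather than the procedure used for the quoted values.

```python
import math

def peak_intensity(e_l_joule, tau_fwhm_s, spot_fwhm_m):
    """Order-of-magnitude peak intensity (W/cm^2): pulse power divided by the
    FWHM focal-spot area. Ignores pulse-shape factors and the energy outside
    the FWHM spot, so it tends to overestimate for tightly focused shots."""
    area_cm2 = math.pi * (0.5 * spot_fwhm_m * 100.0) ** 2   # spot area in cm^2
    power_w = e_l_joule / tau_fwhm_s                        # pulse power in W
    return power_w / area_cm2

# Large-spot case quoted in the text: 157 J, 700 fs, 270 um FWHM spot
print(f"{peak_intensity(157.0, 700e-15, 270e-6):.1e} W/cm^2")  # ~4e17, near the quoted ~5e17
```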
We developed and deployed two novel diagnostics which operate simultaneously to measure the total laser absorption. As shown in figure 1, a custom 180 mm diameter integrating (Ulbricht) sphere was developed to make measurements of the total unabsorbed laser pulse energy, from which the absorbed fraction is determined [26]. The sphere was divided into two hemispheres to enable access for a focal spot and target alignment camera prior to laser irradiation of the target foil positioned at the centre of the sphere. After the target was aligned the two hemispheres were moved together forming a tight seal at the interface. A 70 mm diameter aperture at the equator of the sphere accommodates the full focusing laser cone and a 30 mm diameter aperture at the bottom pole enables the target mounting and positioning. Two small 2 mm diameter apertures, 50° above the equator (60° anticlockwise along the equator from the laser axis) and 45° below the equator (120° clockwise along the equator from the laser axis), were used for multimode optical fibre connections. The other ends of the optical fibres were connected to Ocean Optics (Maya2000Pro) and Andor Shamrock (303i) optical spectrometers. The response of the sphere was calibrated by first measuring the energy throughput of the laser system, including all optics up to the sphere entrance, using a calibrated large diameter calorimeter. The sphere response was characterised by directly irradiating the sphere walls with the laser pulse at relatively low energy levels (up to a maximum of 1 J). For full power laser pulses the fibre optic cables were filtered using well-characterised neutral density filters prior to the fibre terminal to avoid the possibility of optical damage being induced inside the fibre. The absolute energy of the scattered light in the sphere was then calculated by scaling the measured signal by a factor of the neutral density response. The unabsorbed component of the laser pulse was measured by integrating around the central laser wavelength peak by ±40 nm (the range over which the spectrum decreases to the background level). At all energies the spectrometers were filtered such that only the 1st harmonic could be observed. In order to measure the backscattered and specularly reflected light from the target, which escapes the sphere and is collected by the parabola, a scatter screen was placed behind the final turning mirror before the parabola (see figure 1(a)). The backscattered light was imaged using a custom dual wavelength imaging system, as shown in figure 1(d). An absolute calibration for the energy response of the imager was made by directly irradiating the scatter screen with light split from the incoming laser beam. A calibrated 50:50 beam-splitter was used, with one half directed onto a calorimeter and the other onto the scatter screen, thereby enabling a direct calibration of the imager response for laser pulse energy. These calibrations are shown in figures 1(e) and (f).
Measurements of laser absorption
We investigate the scaling of absorption with intensity in a similar range as the detailed investigation reported in Ping et al [18] (for otherwise different laser pulse parameters) and then extend our measurements to include variation of focal spot size. Specifically, we measured the change in absorption as a function of intensity by varying E L whilst keeping f L constant (at best focus; f L =4 μm). The results are shown in figure 2 as blue triangles for l=20 μm and as black squares for l=6 μm targets. We also plot the empirical model presented in [19] and a fit to our data of the same functional form, equation (1), where f abs is the absorption fraction, I L λ 2 is the irradiance in units of W cm −2 μm 2 , P is a fitting parameter and A=3.37×10 20 W cm −2 μm 2 (from Davies [19]). We find that our measurements are in good agreement with the predictions of the Davies model, and thus by extension the measurements reported in Ping et al [18] (upon which that model was based). This is despite the different laser pulse duration and other parameters explored, as detailed above. We note that for significantly thinner targets (sub-micron), which may become subject to hole-boring due to the laser radiation pressure or expand to the point that relativistic self-induced transparency occurs during the interaction, a different scaling with intensity is expected due to a transition from surface-dominated to volumetric interaction processes (see for example [27][28][29][30]).
Our measurements show good agreement, in terms of scaling with I L , with previously published values for which f L is fixed (at a few microns) and E L varied. Published experimental measurements of the intensity scaling of the maximum energy of laser-accelerated protons and the flux of x-rays suggest a different scaling when the intensity is varied by changing the laser focal spot size [21,23,31,32]. We extend our direct measurements of total energy absorption to investigate this. The red circle data points in figure 2 are the measured absorption values obtained by varying f L , for fixed E L . Firstly, it is clear that this scaling is considerably slower than for the case of varying E L . Secondly, we note that for I L ∼5×10 17 W cm −2 there is a factor of ∼3 higher total absorption when using a higher energy and a larger focal spot. The enhanced case has considerably more energy in the pulse (157 J compared to 4 J) and a considerably larger focal spot (270 μm compared to 4 μm).
PIC simulations of laser-energy coupling
We performed simulations using the fully relativistic PIC code EPOCH (in 2D) to investigate changes to the underlying absorption physics arising from variation of f L . The laser light was linearly polarised and delivered in a τ L =550 fs (FWHM) Gaussian pulse (a τ L =30 fs case is considered later). The target was Al 11+ with an initial, neutralizing electron density equal to 30n c (where n c is the critical density; the density at which the plasma frequency is equal to the laser frequency) and thickness varied between 6 and 50 μm. The initial electron and ion temperatures were 10 keV and 40 eV, respectively. A box size of up to 84 μm × 140 μm was used, with 12600 × 11520 cells and 10 particles per cell.
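For reference, the critical density used to normalise the target density can be computed directly from its definition (the density at which the plasma frequency equals the laser frequency); the sketch below evaluates it for the 1.053 μm wavelength used here.

```python
import math

# Physical constants (SI)
EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
M_E  = 9.1093837015e-31   # electron mass, kg
Q_E  = 1.602176634e-19    # elementary charge, C
C    = 2.99792458e8       # speed of light, m/s

def critical_density(wavelength_m):
    """Electron density at which the plasma frequency equals the laser frequency."""
    omega = 2.0 * math.pi * C / wavelength_m
    return EPS0 * M_E * omega**2 / Q_E**2   # m^-3

n_c = critical_density(1.053e-6)
print(f"n_c    = {n_c * 1e-6:.2e} cm^-3")       # ~1e21 cm^-3 for 1.053 um light
print(f"30*n_c = {30 * n_c * 1e-6:.2e} cm^-3")  # initial target density in the simulations
```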
In the first instance, we selected I L and f L values to be the same as two of the experimental data points, to enable comparison. We consider a peak intensity of ∼1×10 19 W cm −2 , with f L =50 μm and f L =4 μm.
Since it is not possible to deconvolve the incoming and reflected laser fields from the fields associated with the expanding electron population, the energy content and temperature of the electrons is considered in lieu of an absorption measure. Example electron spectra at a time t=334 fs after the peak of the pulse interaction (i.e. the time prior to the fastest electrons leaving the simulation box) are shown in figure 3(a). This exhibits behaviour consistent with the experiment, namely that the larger spot case results in a higher total energy of relativistic electrons compared to the smaller spot case. The inset of figure 3(a) shows the normalised energy difference in the total electron energy above the initial thermal energy for f L =50 μm compared to f L =4 μm (ε 50μm and ε 4μm , respectively) as a function of time. The energy gain increases as the intensity increases on the rising edge of the laser pulse and saturates towards the peak of the laser pulse interaction. The gain primarily comes from a non-uniform heating of electrons at the higher end of the spectrum, which changes the temperature of the high energy electrons. This result cannot be accounted for by established electron temperature scaling laws (for example, those presented in [33,34]) or the empirical scaling of Davies [19] discussed earlier, because I L is constant.
Figure 2. Measured total laser energy absorption as a function of irradiance. The black square and blue triangular data points are measurements made when the intensity is changed by varying E L for l=6 μm and l=20 μm, respectively. The dot-dash line is a fit to this data using the empirical model presented in [19] and shown in equation (1), with the parameters P=0.26 and A=3.37 × 10 20 W cm −2 μm 2 . The red circular data points are measurements made by varying f L for constant E L .
In order to establish if this is a surface interaction effect (e.g. changes in the absorption mechanism or deformation of the critical density surface) or if it arises due to volumetric effects, we investigated the effect of changing l for both f L =50 μm and f L =4 μm. The target thicknesses investigated were l=6, 20 and 50 μm. The results of these simulations are shown in figures 3(b) and (c), again by extracting the electron spectrum sampled 334 fs after the peak of the pulse. For f L =4 μm the electron spectrum and temperature are essentially independent of target thickness ( figure 3(b)). By contrast, for the f L =50 μm case ( figure 3(c)), there is a significant change in the electron spectra as l is changed, with the thinnest targets resulting in the highest temperature and electron energies. For large f L , decreasing l leads to significant changes in the measured electron spectra. For small l, increasing f L also leads to significant changes in the spectra for constant I L . This change (and the correlated change in the total absorption) therefore cannot be a purely surface, field driven, effect and must be related to the relativistic electron dynamics within the target bulk.
Mackinnon et al [35] show via PIC simulations that the recirculation of relativistic electrons between the sheath fields [24] which build up on the surfaces can enhance the overall electron density and temperature at the rear and thereby enhance the energy of sheath-accelerated protons. Here, we propose that this recirculating electron population is in fact extracting additional energy from the laser pulse upon returning to the front surface, resulting in a higher overall laser-to-electron coupling efficiency. We test this hypothesis by reducing τ L such that an electron close to the average energy of the spectrum (≈5 MeV) could only make one pass within the target during the pulse duration and therefore gain no (or limited) additional energy from this recirculating process. As shown in figure 3(d), we measure essentially no difference in the electron spectra between the two focal spot sizes. This shows that the recirculation of relativistic electrons within the target is playing an important role in the laser-energy coupling dynamics in the interaction. It is difficult to directly observe the temporal dynamics of this change in the simulations after a large number of passes, given that there is a population of electrons constantly being injected while the pulse is present, in addition to the recirculating population. Since there is a large range of electron energies, after a certain distance or a number of passes through the target, the two populations become spatially indistinct. However, in the early stages of the interaction it is possible to observe the recirculating population and the additional energy gain at the front surface. This is shown in figure 4 as a time-space map of the density of electrons with energy greater than 150 keV and propagating counter to the incoming laser pulse. While the energy gain and its dependence on the number of round-trips (given a sufficiently large spot size, pulse length and electron energy) is clear from the simulations, the precise mechanism of the energy exchange is more difficult to extract. The process appears similar to that proposed in Krygier et al [36], where electrons leave the target propagating toward the laser pulse and are turned around by a loop magnetic field, extracting additional energy from the laser pulse via direct laser acceleration (DLA) [37]. That process typically takes place in a long density scale length plasma. For short density scale lengths, the recirculating electrons are reflected by the sheath fields building up on the target front and rear surfaces and extract additional energy during each interaction with the laser field at the target front.
A geometric model of recirculation-enhanced absorption
These results highlight that laser intensity alone is not sufficient to predict changes in the absorption or the internal electron spectrum. Whereas I L can be changed by varying E L , τ L or f L , the coupling dynamics can be radically different depending on which parameter is changed. There is a multi-dimensional parameter space in which the recirculation-enhanced absorption can occur. In order for additional energy to be extracted from the laser pulse the recirculation time (i.e. the transit time for a round trip) must be shorter than τ L and the divergence angle of the electrons must be small enough that a sufficient number of electrons remain within the focal spot region upon their return to the front surface. The recirculation time and degree to which electrons stay within the region of the focal spot is also affected by the target thickness.
To explore this multi-dimensional parameter space in a tractable way, we construct an analytical model based purely on a ballistic (i.e. not affected by self-generated fields or scattering within the target) approach to electron transport and recirculation, whereby an electron will gain a constant fraction of its initial energy upon re-interacting with the laser at the target front surface. Electron energies above 1 MeV are considered and therefore the effect of collisions is ignored, given that the distance travelled by the electron during the pulse duration is small compared to the stopping range. The number of passes an electron will make during the time τ L , denoted n τ , depends on the target thickness l, the relativistically corrected electron velocity ν e and the angle of divergence θ div of the electron trajectory with respect to the central axis. For a given θ div it is also possible to calculate the number of passes an electron will make before it leaves the laser focal spot region, denoted n spot . If we take the energy gained, E gain , by an electron of energy E e from the laser to be a constant fraction α of that energy, then the overall energy gained via recirculation is set by the minimum of n τ and n spot . In other words, either the electron gains energy until it leaves the region of the focal spot (due to its trajectory) or until the laser pulse switches off. There is of course not simply a single electron energy or divergence angle, but a distribution of both. We take N(E e ) = N 0 exp(−E e /kT e ) as the energy spectrum of the electron population, where T e is the electron temperature and N 0 is the initial electron number. The energy-dependent divergence angle can be calculated using the formula of Moore et al [38], which relates the ejection angle to the electron Lorentz factor γ through tan θ = √(2/(γ − 1)), with θ̄ the average divergence angle of the distribution.
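A minimal numerical sketch of this ballistic picture is given below. The pass-number expressions are written here from simple geometry (a round-trip path of 2l/cos θ_div and a lateral step of 2l tan θ_div per round trip) and the per-interaction gain is taken as a fixed fraction α; these specific forms and all numerical values are assumptions made for illustration rather than the exact expressions used to produce figure 5.

```python
import numpy as np

C = 2.99792458e8          # speed of light, m/s
M_E_C2 = 0.511            # electron rest energy, MeV

def passes_in_pulse(E_mev, theta, l, tau):
    """Round trips completed during the pulse, assuming ballistic transport:
    one round trip covers a path length of 2*l/cos(theta)."""
    gamma = 1.0 + E_mev / M_E_C2
    v = C * np.sqrt(1.0 - 1.0 / gamma**2)
    return v * tau * np.cos(theta) / (2.0 * l)

def passes_in_spot(theta, l, f_spot):
    """Round trips before the electron drifts out of the focal-spot region,
    assuming a lateral step of 2*l*tan(theta) per round trip."""
    return f_spot / (2.0 * l * np.tan(theta))

def recirculation_gain(E_mev, theta, l, f_spot, tau, alpha=0.1):
    """Total fractional energy gain: alpha per front-surface interaction,
    accumulated over min(n_tau, n_spot) interactions (alpha is assumed)."""
    n = min(passes_in_pulse(E_mev, theta, l, tau), passes_in_spot(theta, l, f_spot))
    return alpha * max(n, 0.0)

# Illustrative case: 5 MeV electron, 20 deg divergence, 6 um foil, 50 um spot, 550 fs pulse
g = recirculation_gain(5.0, np.radians(20), 6e-6, 50e-6, 550e-15)
print(f"fractional energy gain ~ {g:.1f}")
```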
Using this model, the total energy gained over the spectrum due to recirculation can be examined as a function of f L and l. In figures 5(a)-(c) this quantity is plotted for l in the range 6-25 μm and f L in the range 4-200 μm, for τ L equal to 50, 500 and 1000 fs. The total energy gain is normalised to the total energy of the input spectrum. It can be seen that for τ L of the order of the electron transit time (i.e. tens of femtoseconds) there is no additional energy gain, because the recirculating electron population does not re-interact with the laser pulse a large number of times. For longer pulse durations, as in figures 5(b) and (c), there is significant energy gain for l<20 μm, but only when f L is greater than ∼50 μm. The total energy gain is principally a result of an increase in the hot tail of the spectrum, as observed in the simulations. This occurs because the highest energy electrons have the smallest divergence in the distribution and thus the shortest recirculation time, and therefore interact more times with the laser pulse. The higher divergence of the lower energy electrons means they move out of the region of the focal spot in fewer passes and also return to the front surface fewer times.
It is important to note that the range of l and τ L for which absorption enhancement occurs is highly dependent on the choice of energy spectrum and divergence-energy functions, as well as the value of the gain coefficient α. Furthermore, the temporal evolution of the forward going electron population, as a result of the temporal evolution of the laser heating at the front surface, is not considered. Instead, we are assuming that a high energy electron population exists and allow this population to freely recirculate. Also, the population of relativistic electrons which escape the target (although often only on the order of a few percent) is not included in the model. These complexities, especially in regard to the self-consistent dynamics of the laser-injected and recirculating populations, quickly make this a complex problem to solve analytically. To that end, this simple model should be seen not as a calculation of the overall change in absorption, but illustrative of the parameter space in which recirculation-enhanced absorption is relevant.
Summary
In summary, these results demonstrate the significance of laser focal spot size in the absorption of energy into thin foils in the relativistic laser-plasma interaction regime. Complementary diagnostic techniques have been applied to determine the total energy absorption via calibrated measurements of the total laser energy specularly reflected and scattered from the target foil. Good agreement is found with previously published results [18] of absorption scaling with laser intensity when varying the pulse energy. However, a slower scaling is measured when varying the focal spot size for a fixed laser energy, which is consistent with the picture emerging from investigations of secondary particle and radiation source properties as a function of focal spot size. PIC simulation results show that this slower absorption scaling with intensity is driven by additional heating of the relativistic electron population which recirculates within the foil between the sheath fields formed on the surfaces. This occurs when the focal spot size and pulse duration are sufficiently large and the electron beam divergence is sufficiently small to enable the recirculating population to interact multiple times with the laser field. The effect is diminished or does not occur in tight focal spot geometries or for femtosecond-scale laser pulse durations (with micron-thick targets).
Figure 5. l-f L parameter space plots. The colour maps correspond to the difference in total energy between the input and output spectra for τ L equal to: (a) 50 fs; (b) 500 fs; (c) 1000 fs.
The results are not only of fundamental interest, but also impact on the optimisation of laser energy coupling in relativistic laser-foil interactions and therefore influence the development of the laser-driven ion acceleration, high harmonic generation, x-ray production and other sources involving foil targets. They show that the method of varying the laser intensity (i.e. by variation of pulse duration, energy or focal spot size) can significantly change the overall energy coupling into the plasma, and by extension key properties of the secondary beams of particles and radiation produced. | 6,077.4 | 2018-03-28T00:00:00.000 | [
"Physics"
] |
Sulphuric Acid Leaching of Spent Nickel Metal Hydride Car Batteries
The treatment of spent nickel metal hydride batteries (NiMHs) of Lexus vehicles to recover nickel (Ni) and cobalt (Co) as well as rare earth elements (REEs) including La, Ce, Nd and Y was investigated. Co-extraction of Al, Fe, Cr and Cu has also been examined. Following the batteries' manual dismantling to remove metallic cases, outer plastics and current collectors, the remaining parts, including cathodes of black-coloured nickel (oxy)hydroxides, anodes consisting of a nickel-containing alloy (AB5 mischmetal type), and separators, were simultaneously ground down to −5 mm using a hammer mill equipped with sieves. The fine (−1 mm) fraction of this product was further subjected to sulphuric acid leaching to recover the high-value elements contained. An acid consumption of 14 mol H2SO4 per kg of this fraction was found to be sufficient to decrease the pH to less than 1. Leaching experiments were performed using 0.5, 1 and 2 M sulphuric acid solutions at 5% pulp density and temperatures of 50, 75 or 95 °C. The optimum conditions for the extraction of all elements were 2 M H2SO4 concentration and a temperature of 75 °C, with the exception of Ni extraction, which reached its highest value at 95 °C and 2 M H2SO4 concentration. Extractions of 93.34% of Ni, 99.03% of Co and 100% of REEs were achieved at these conditions.
Introduction
NiMH batteries represent the evolution of the nickel-hydrogen (Ni-H2) battery, as hydrides have replaced hydrogen to avoid the danger of working with gases at high pressures. Together with lithium-ion batteries (LIBs), they have replaced NiCd batteries, which contain toxic cadmium. NiMHs entered the market in 1991, introduced by the Japanese company Sanyo [1].
NiMH batteries are widely used in many energy storage applications. They are mostly used in plug-in electric vehicles, hybrid vehicles and robots [2]. Moreover, some typical applications are power tools, cell phones, hand tools, emergency lighting, laptop computers, calculators and GPS systems [3].
The nickel-metal hydride battery is a type of secondary battery and can be fully recharged. NiMH batteries contain significant amounts of critical metals, including Ni, Co and rare earth elements (REEs) such as Ce, Pr and Nd, whose recycling is very important as it contributes to the Circular Economy model. In addition, Co and REEs are classified by the European Union as extremely critical due to the high risk to their supply [4,5]. Ni-MH batteries have many advantages over other types of secondary batteries, especially concerning energy density and life cycle. They are capable of being recharged hundreds of times, with battery life typically restricted to five years or less. The energy density of NiMH batteries translates into either longer working times or a reduction in the battery space needed. They are safe and can be manufactured in virtually any size (10 mAh-5 Ah). They also have some disadvantages, such as lower charging efficiency and issues with automatic charging, which become worse when the batteries are in high-temperature environments [4].
The main parts of a Ni-MH battery are the anode, cathode, electrolyte, separator and the steel case [6]. The anode is made of hydrogen storage alloy powder based on mischmetal- and nickel-containing substituents. Nowadays, all NiMH cells are made with AB5-type alloys due to their better performance. The cathode consists of nickel coated with nickel hydroxide. The approved electrolyte in such batteries is KOH.
The electrochemistry of the nickel-metal hydride battery is generally represented by the following charge and discharge reactions. The overall reaction taking place in Ni-MH batteries is Ni(OH)2 + M ⇌ NiOOH + MH [2]. At the positive electrode, the charge reaction is based on the oxidation of nickel hydroxide, just as it is in the nickel-cadmium couple.
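For completeness, the standard textbook half-reactions of the Ni-MH couple, written in the charging direction, can be expressed as follows (these are the generic forms rather than equations reproduced from reference [2]):

```latex
% Standard Ni-MH half-reactions (generic textbook forms), charging direction
\begin{align*}
\text{Positive electrode:}\quad & \mathrm{Ni(OH)_2 + OH^- \rightarrow NiOOH + H_2O + e^-}\\
\text{Negative electrode:}\quad & \mathrm{M + H_2O + e^- \rightarrow MH + OH^-}\\
\text{Overall:}\quad & \mathrm{Ni(OH)_2 + M \rightleftharpoons NiOOH + MH}
\end{align*}
```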
European industry demand for nickel, cobalt and REE is increasing and, since Europe is not self-sufficient, recovery from secondary sources such as NiMH batteries is of great importance for the coming years.
Materials and Methods
The experimental process followed includes several different stages. Each battery consists of eight individual cell batteries, metallic cases, outer plastics and current collectors. Manual dismantling was used to remove the metallic cases, outer plastics and current collectors, while the remaining parts, including cathodes of black-coloured nickel (oxy)hydroxides, anodes consisting of a nickel-containing alloy (AB5 mischmetal type), and separators, were ground down to −5 mm using a hammer mill equipped with sieves. The finer (−1 mm) fraction of this product was chemically analysed by X-ray fluorescence spectroscopy (XRF) and by fusion, and its mineralogical composition was determined by X-ray diffraction (XRD) and scanning electron microscopy (SEM).
The sample was then subjected to sulphuric acid leaching in order to recover the metals of interest. All experiments were conducted in a 500-mL five-necked, round-bottomed glass split reactor, which was fitted with a glass stirrer, a vapour condenser and a thermometer. In all the experiments, a constant stirring speed was applied to ensure suspension of the particles. Heating was provided by an electrical mantle and the temperature of the liquid was controlled by a Pt-100 sensor. An acid consumption of 14.16 mol H2SO4/kg of the battery fine sample was determined to be sufficient to achieve the desired final pH value of 1. Nine leaching experiments were performed using 0.5, 1 and 2 M sulphuric acid solutions at 5% pulp density, under stirring (450 rpm) and at temperatures of 50, 75 or 95 °C. Each run lasted 90 min. The solid residues were filtered under vacuum and analysed by SEM and XRD. All leach solutions were analysed by flame atomic absorption spectrometry (Flame-AAS) and ICP-OES.
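A quick acid-balance check, sketched below, shows how the chosen molarities compare with the measured acid demand; it assumes that 5% pulp density corresponds to 50 g of solid per litre of solution (a w/v basis), which is an assumption made here for illustration.

```python
# Acid balance check for the leaching tests; assumes "5% pulp density" means
# 50 g of solid per litre of solution (w/v), which is an assumption here.
ACID_DEMAND = 14.16          # mol H2SO4 per kg of battery fines (from the text)
PULP_DENSITY = 0.05          # kg of solid per litre of solution (assumed w/v basis)

required_molarity = ACID_DEMAND * PULP_DENSITY   # mol/L needed to meet the demand
print(f"required H2SO4 concentration ~ {required_molarity:.2f} M")

for molarity in (0.5, 1.0, 2.0):
    ratio = molarity / required_molarity
    print(f"{molarity:.1f} M supplies {ratio:.2f}x the measured acid demand")
```

On this assumed basis, the 0.5 M solution supplies less acid than the measured demand, which is consistent with the low recoveries reported for that condition.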
Results
The chemical analysis of the fine solid was conducted by XRF as well as by fusion followed by AAS and ICP-OES analysis. The results are presented in Table 1. The sample consists mainly of nickel (Ni), lanthanum (La), cobalt (Co) and cerium (Ce), with significant amounts of neodymium (Nd) and yttrium (Y) and the base metals Al, Fe and Mn. No copper (Cu) or chromium (Cr) were detected. Figure 1 presents SEM images of the sample: Figure 1a shows cathode material, consisting mainly of nickel and cobalt, and Figure 1b shows anode material of the AB5 type. The main mineralogical phases in the sample are LaNi5, Ce2Ni7, Ce5Co19, Ni, Ni(OH)2, NiH and SiO2, as can be seen in the XRD pattern of the sample given in Figure 2. The results of the leaching experiments are given in Table 2; % extraction is the ratio of the element mass recovered in the leach solution to the initial content in the sample. Figure 3 compares the X-ray diffraction patterns of some of the solid leach residues with that of the initial sample (blue colour); the mineralogical phases LaNi5, Ce2Ni7, Ce5Co19 and NiH no longer appear, whereas Ni, Ni(OH)2 and SiO2 can still be identified.
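The % extraction defined above can be computed as in the short sketch below; the concentration, volume and composition values are hypothetical placeholders, not the measured data of Tables 1 and 2.

```python
def percent_extraction(c_leachate_g_per_l, volume_l, sample_mass_g, element_wt_fraction):
    """% extraction = mass of the element found in the leach solution divided by
    its initial mass in the solid sample. Inputs here are illustrative only."""
    dissolved_mass = c_leachate_g_per_l * volume_l
    initial_mass = sample_mass_g * element_wt_fraction
    return 100.0 * dissolved_mass / initial_mass

# Hypothetical example: 1 L of leachate at 15 g/L Ni from 50 g of fines containing 32 wt% Ni
print(f"Ni extraction: {percent_extraction(15.0, 1.0, 50.0, 0.32):.1f}%")
```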
Conclusions
The recycling of NiMH batteries is of great importance in order to recover economically and technologically important metals (Ni, Co, REEs). The purpose of this work was to find the optimal conditions to achieve maximum metal leaching from the NiMH batteries of Lexus vehicles. Based on the experimental results, the following conclusions can be drawn:
• The present work showed that metal recoveries of almost 100% can be achieved, during leaching, for Co, Ce, Y, Nd and La. The extraction of Ni did not follow this pattern and reached about 85% with leaching agents of 1 or 2 M H2SO4. Maximum Ni recovery was obtained with a 2 M sulphuric acid solution at a temperature of 95 °C, reaching 93%.
• The optimum conditions for the extraction of the elements other than Ni were 2 M H2SO4 concentration and a temperature of 75 °C.
• A concentration of 0.5 M sulphuric acid at the tested liquid-to-solid ratio (20 L/kg) is not sufficient to achieve high metal recoveries.
• Increasing the sulphuric acid concentration favours metal extraction.
• An increase in temperature does not seem to have a significant effect on metal extraction.
Author Contributions: A.X. and P.O. contributed to the study conception and design. Material preparation, data collection and analysis were performed by E.P., P.O., K.B. and P.T. All authors contributed equally to the interpretation of the results and provided critical feedback. The first draft of the manuscript was written by E.P. and revised by all authors. All authors have read and agreed to the published version of the manuscript.
Funding:
The work was financed by NTUA resources.
Institutional Review Board Statement: Not applicable. | 2,613.8 | 2022-04-12T00:00:00.000 | [
"Materials Science",
"Chemistry"
] |
Driveable Area Detection Using Semantic Segmentation Deep Neural Network
Autonomous vehicles use road images to detect roads, identify lanes and objects around the vehicle, and retrieve other important pieces of information. This information retrieved from the road data helps in making appropriate driving decisions for autonomous vehicles. Road segmentation is such a technique that segments the road from the image. Many deep learning networks developed for semantic segmentation can be fine-tuned for road segmentation. The paper presents details of the segmentation of the driveable area from the road image using a semantic segmentation network. The semantic segmentation network used segments the road into the driveable and alternate areas separately. The driveable area and the alternately driveable area on a road are semantically different, but differentiating between them is a difficult computer vision task since they are similar in texture, color and other important features. However, due to the development of advanced deep convolutional neural networks and road datasets, the differentiation was possible. Results achieved in detecting the driveable area using the DeepLab semantic segmentation network on the Berkeley DeepDrive dataset are reported.
Introduction
Advanced Driver Assist Systems (ADAS) and autonomous driving technology have greatly contributed to the explosive growth of the field of deep learning, fueled by the massive collection and public availability of road datasets. Among the many computer vision tasks involved in ADAS and autonomous driving, road detection has been considered one of the most important topics of research, especially in the scope of levels 4 and 5 of driving automation [1]. Road perception algorithms are the first step for any subsequent path planner that aids autonomous navigation. Remarkable progress has been achieved in road segmentation in the last decade, both in feature-based and learning-based approaches. Collision-free automatic driving decisions rely on roads segmented from vehicle imagery. Often, high-level information about roads, such as ego-lane detection [2] and object-lane relationships [3], is required for successful cognitive actions ensuring collision-free navigation.
With the advent of deep neural networks and publicly available datasets that aid in explicit labelling of road-related features such as drivable area, lane markings, etc., there is tremendous interest in extending the road detection problem to drivable area detection. In fact, much recent research has focused on the detection of flat areas which are considered to be drivable. Such high-level information allows self-driving cars to act very similarly to human drivers. The paper presents the details of the implementation of driveable area detection using a deep neural network developed for semantic segmentation. Driveable area detection involves precise segmentation of the ego-lane, i.e. the lane of travel of the vehicle on which the camera is mounted (the ego-vehicle). The driveable area is segmented based on the availability of the lane markings. The dataset used for training in the research work presented is the Berkeley DeepDrive (BDD) dataset [4].
Related Work
Much research has been carried out to solve the road segmentation problem since the development of deep neural networks. Even before the recent evolution of deep neural networks, road segmentation was carried out using other computer vision algorithms, but the features learned by DCNNs were incomparably better. In one work, road images were segmented into parking lanes, sidewalks and road using aerial and ground views of the image, with inputs from camera, GPS and IMU systems [5]. Caltagirone L et al., in their work, detected roads using a DCNN with data from camera and LIDAR [6]. They applied an FCN to a subset of the KITTI dataset for the experiment. In another work, a different approach was taken to detect roads with minimal amounts of labeled data; it used semi-supervised learning applied to combined camera and LIDAR inputs. Yang X et al. extracted lane markings and detected roads using a combination of a Recurrent Neural Network and U-Net to reduce the propagation error and improve the accuracy of detection [7]. A model based on conditional random fields, which takes inputs from both camera and LIDAR as random variables, was investigated by Xiao L [8]. Various research has been carried out for road detection and lane marking, and the introduction of the Berkeley DeepDrive dataset gave rise to experimentation on driveable-area-related research. Semantic segmentation has evolved alongside the rise of CNNs. The first of such state-of-the-art architectures was developed by Long J et al. [9]. Using the classification network's learned features, the model adapted the classification network into a fully convolutional network and fine-tuned it for semantic segmentation. Unlike [9], which performed end-to-end learning, the model proposed by Noh H et al., built on the classification network VGG-16, integrated a deep deconvolution network and proposal prediction [10]. The R-CNN model developed by Girshick R et al. applied convolutions on each of the region proposals extracted from the image and classified the regions into labels [11]. This work gave rise to a new branch of research on combining region proposals and convolutions. Convolution and deconvolution on the image during segmentation, due to downsampling and pooling, loses the high-resolution information required for segmentation. In the model developed by Lin G et al., to retain the high-level spatial information, residual connections between the convolution and deconvolution layers were introduced, similar to identity mapping [12]. An interesting work by Luc P combined the semantic segmentation network with an adversarial network; using adversarial training, the inconsistencies between the ground truth and the predicted map were detected and corrected [13]. Then came the set of models that concentrated on dilated convolutions, which helped in obtaining local information along with global information by increasing the receptive field of the network [14,15,16,17]. One such model is DeepLab, which combines deep convolutions, dilated convolutions at different rates, and conditional random fields. This model achieved 79.7 percent mIoU on the PASCAL VOC-2012 dataset.
Details of Dataset
To find a suitable dataset for the intended purpose of driveable area detection, the datasets available for road data were reviewed. The PASCAL VOC 2012 dataset [18], Microsoft COCO dataset [19] and ADE20K dataset are some of the general semantic segmentation datasets. These datasets contain objects of vast categories and are thus helpful in training the network to identify any object present in the image. For specific objects of interest to be identified by the network (e.g. road, car, signal, etc. for the autonomous driving scenario), the weights of the network can be fine-tuned by training it with an object-specific dataset (e.g. datasets with road objects). Some road-specific datasets available are the Cityscapes dataset [20], KITTI road dataset [21], ApolloScape dataset [22], Oxford RobotCar dataset [23] and Berkeley DeepDrive dataset [4]. Among these, the BDD dataset was chosen based on the merit of its large-scale annotations for driveable area segmentation. This information is helpful in determining vehicle localization on the road from image data and, in turn, in decision making. Though a few other datasets have annotations for road segmentation, these are non-standard and very small in size. The BDD dataset is promising in the scope of its attributes, such as its diverse nature and the sheer number of image annotations made specifically for driveable area detection. The dataset has images taken from a variety of geographical locations and environmental and weather conditions. The dataset has the labels "Directly Drivable Area" and "Alternatively Drivable Area". The "Directly Drivable Area" is the area that the driver is currently driving on; the driver has priority over this area with respect to other cars. The "Alternatively Drivable Area" is the area that the driver is not currently driving on, but can drive on by changing lanes. The dataset contains 91,626 instances of directly drivable area and 88,392 instances of alternatively drivable area. The annotation is given as a mask image which contains pixel-level labels for the drivable area, alternative area and background. The dataset is also annotated for lane markings, object detection and instance segmentation. For all the work reported in this paper, only the BDD dataset images with annotations for driveable area detection are utilized. The driveable area annotations consist of a pair of images for each example: one is the raw RGB image of resolution 1280 × 720 pixels and the other is the annotation image of the same size, which is also an RGB image. The annotation image has the ego-lane pixels in red, other lanes marked in blue and all other regions of the image in black. A sample image pair is shown in Fig. 1.
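Given the colour convention described above (ego-lane in red, alternative lanes in blue, background in black), a minimal Python sketch for converting an annotation image into integer class labels might look as follows; the file name and the channel thresholds are illustrative choices, not part of the official BDD tooling.

```python
import numpy as np
from PIL import Image

def decode_drivable_mask(mask_path):
    """Convert a BDD drivable-area colour mask into integer labels:
    0 = background, 1 = directly drivable (red), 2 = alternatively drivable (blue).
    The channel thresholds below are an implementation choice."""
    rgb = np.array(Image.open(mask_path).convert("RGB"))
    labels = np.zeros(rgb.shape[:2], dtype=np.uint8)
    red, blue = rgb[..., 0], rgb[..., 2]
    labels[(red > 127) & (blue < 128)] = 1    # ego-lane pixels marked in red
    labels[(blue > 127) & (red < 128)] = 2    # alternative lanes marked in blue
    return labels

# Hypothetical usage:
# labels = decode_drivable_mask("example_drivable_color_label.png")
```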
Deep Learning Network
The work presented in the paper describes the implementation details of applying a popular semantic segmentation network for the purpose of driveable area detection. Though conceptually driveable map inferencing is similar to the general semantic segmentation problem, the challenges involved in the former are much higher. This is mainly because the classification is between two regions which are practically identical in terms of all visual properties. The only differentiating element is the lane marking, which is assumed to be of a colour contrasting with the driveable area (the road). Deep convolutional neural networks have been the choice for various classification tasks in computer vision systems, mainly owing to their built-in invariance to local image transformations. But this property does not meet the requirements of dense inference tasks such as semantic segmentation, where abstraction of information in the image is not a requirement. Among the deep CNN based architectures available for semantic segmentation, in the current work reported in this paper, DeepLab [24] was identified to be suitable for the intended purpose of driveable area detection. This is mainly because of its key properties, such as good resolution of the features, a good ability to tackle the problem of objects existing at multiple scales and high localization accuracy, which in a deep CNN framework is difficult to obtain due to the invariance property. DeepLab enjoys these merits by strategically incorporating significant changes in the architecture. Some of the important changes introduced in DeepLab, which benefit not only the semantic segmentation problem but also higher-complexity tasks such as the driveable area detection problem, are as follows (a minimal illustration of atrous convolution is given after Fig. 2): • Atrous convolution instead of convolution with downsampled filters: Generally, CNNs have a series of convolutional layers interleaved with pooling layers, resulting in downsampling of the filters. Atrous convolution performs convolution after the last few pooling layers with upsampled filters instead of downsampled filters, as in the case of regular convolution. This strategy improves the feature resolution, which is very valuable in pixel-level inferences such as driveable area detection. It may be noticed that atrous convolution serves as a valuable replacement for the deconvolutional layers commonly found in most semantic segmentation networks [25]. A high-level block diagram of the network architecture is shown in Fig. 2.
Fig. 2. Abstract Block Diagram of DeepLab Network
Beyond the advantages that DeepLab specifically addresses, some further properties relevant to the driveable area detection problem from a practical standpoint are speed, accuracy, and simplicity of the network. Since the intended application is a driving scenario, real-time operation, accuracy, and speed are primary concerns for any segmentation algorithm. The detailed block diagram of the DeepLab network is presented in Fig. 3.
Results and Conclusion
In line with most semantic segmentation benchmarking, mIoU (mean intersection over union) is used as the evaluation metric to assess the performance of the network; other evaluation metrics are outside the scope of this paper. Sample images from the validation set and the corresponding inference images, containing three classes, viz. driveable area, alternately drivable area, and background, are presented in Fig. 5. The intersection over union is measured between the inference images and the ground truth in the dataset [19]. The IoU for the first 48 images is presented in Fig. The IoU for driveable area detected using the DeepLab semantic segmentation network is reported in this paper. As part of future work, driveable area detection will be carried out with state-of-the-art semantic segmentation networks and compared, and the extraction of high-level semantics useful for autonomous driving will be carried out using the detected driveable area.
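For concreteness, mean IoU over the three classes can be computed from a predicted label map and the ground-truth label map as sketched below; the class indices, image size, and random inputs are assumptions used purely for illustration.

```python
import numpy as np

def mean_iou(pred: np.ndarray, gt: np.ndarray, num_classes: int = 3) -> float:
    """Mean intersection-over-union over the classes present in either image."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:                       # skip classes absent from both images
            ious.append(inter / union)
    return float(np.mean(ious))

# Example with random 720 x 1280 label maps (0 = background, 1 = direct, 2 = alternative).
rng = np.random.default_rng(0)
pred = rng.integers(0, 3, size=(720, 1280))
gt = rng.integers(0, 3, size=(720, 1280))
print(f"mIoU = {mean_iou(pred, gt):.3f}")
```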
• Atrous spatial pyramid pooling: To handle objects at multiple scales, images of objects acquired at many scales are generally used. DeepLab instead utilizes an atrous spatial pyramid pooling strategy, which is a computationally efficient way of ensuring scale invariance. The strategy involves the use of multiple filters that have complementary fields of view. This ensures that objects as well as their context are captured at multiple scales, resulting in the scale-invariance property. • Fully-connected CRFs: Accurate localization is incorporated through the use of fully connected pairwise conditional random fields.
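A schematic atrous spatial pyramid pooling head in the same spirit, i.e. several parallel convolutions with complementary dilation rates whose outputs are fused, might look as follows; the filter counts and dilation rates are illustrative assumptions rather than the published DeepLab configuration.

```python
import tensorflow as tf

def aspp_block(features: tf.Tensor, filters: int = 256) -> tf.Tensor:
    """Parallel atrous convolutions with different fields of view, fused by concatenation."""
    branches = [tf.keras.layers.Conv2D(filters, 1, padding="same", activation="relu")(features)]
    for rate in (6, 12, 18):  # complementary fields of view
        branches.append(
            tf.keras.layers.Conv2D(filters, 3, padding="same",
                                   dilation_rate=rate, activation="relu")(features)
        )
    merged = tf.keras.layers.Concatenate()(branches)
    return tf.keras.layers.Conv2D(filters, 1, padding="same", activation="relu")(merged)

x = tf.random.normal([1, 45, 80, 512])   # a hypothetical backbone feature map
print(aspp_block(x).shape)
```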
Fig. 3. Detailed Block Diagram of DeepLab Network
The implementation of the DeepLab network for driveable area detection is carried out with the TensorFlow API. TensorFlow provides pre-trained models of DeepLab which can be used as a starting point for training.
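To make the inference step concrete, the sketch below applies a fine-tuned segmentation model to a BDD frame. The model path, input preprocessing, and output layout are assumptions for illustration only; the pre-trained DeepLab checkpoints distributed with the TensorFlow research code come with their own loading utilities.

```python
import numpy as np
import tensorflow as tf
from PIL import Image

# Hypothetical fine-tuned model saved in Keras format; not an official DeepLab artifact.
model = tf.keras.models.load_model("deeplab_bdd_drivable.keras")

frame = np.array(Image.open("example_raw.jpg").convert("RGB"), dtype=np.float32) / 255.0
batch = frame[np.newaxis, ...]                      # shape (1, 720, 1280, 3)

logits = model.predict(batch)                       # assumed shape (1, 720, 1280, 3 classes)
pred = np.argmax(logits, axis=-1)[0].astype(np.uint8)
Image.fromarray(pred * 100).save("drivable_prediction.png")  # labels scaled for visibility
```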
Table 1. Key Training Specifications | 2,881.2 | 2020-01-01T00:00:00.000 | [
"Computer Science"
] |
The mechanism of action of ethanolamine ammonia-lyase, a B12-dependent enzyme. Evidence for two intermediates in the catalytic process.
Abstract Ethanolamine ammonia-lyase, an enzyme catalyzing the adenosylcobalamin-dependent deamination of ethanolamine, also catalyzes the conversion of l-2-aminopropanol to propionaldehyde and ammonia. In this reaction, tritium is transferred from enzyme·[5'-3H]adenosylcobalamin to the C-1 position of l-2-aminopropanol, as well as to the α carbon of the product aldehyde. The labeling pattern is consistent with the mechanism of hydrogen transfer deduced with other substrates. Tritium transfer also occurs between enzyme·[5'-3H]adenosylcobalamin and propionaldehyde in the presence of NH4+. Unlike the deamination of ethanolamine, the conversion of 2-aminopropanol to propionaldehyde and NH4+ is reversible, since tritiated 2-aminopropanol was isolated from reaction mixtures originally containing only propionaldehyde, ammonia, and enzyme·[5'-3H]adenosyl cobalamin. The partitioning of tritium derived from enzyme·[5'-3H]-adenosylcobalamin between l-2-aminopropanol and propionaldehyde was determined in reactions begun with l-2-aminopropanol as well as in reactions begun with propionaldehyde and ammonia. In the first case, the ratio [3H]propanolamine to [3H]propionaldehyde is 1.8:1; in the second case, the ratio is 0.3:1. This difference in the partitioning of tritium from the tritiated enzyme complex is consistent with the notion that there are at least two intermediates in the catalytic process, each exchanging tritium with coenzyme, which interconvert slowly with respect to the rate of tritium exchange.
This is Paper 11 in this series; the previous paper is Ref. 1. Ethanolamine ammonia-lyase from Clostridium sp., an adenosylcobalamin-requiring enzyme, catalyzes the conversion of ethanolamine to acetaldehyde and NH4+ (2). It has been reported that L-2-aminopropanol is a competitive inhibitor of that reaction (3). In the course of our studies on the mechanism of action of ethanolamine ammonia-lyase, we have confirmed this fact, but found, in addition, that L-2-aminopropanol is also a substrate for that enzyme.
This substrate displays some unique properties which enabled us to obtain further evidence in support of the mechanism previously proposed for adenosylcobalamin-dependent rearrangements (4-7). The results obtained with L-2-aminopropanol are reported in this and a subsequent communication.
MATERIALS AND METHODS
Enzyme, Coenzyme, and Substrate-Ethanolamine ammonia-lyase from Clostridium sp. (3) was purified and resolved of bound cobamides as previously described (8, 9). Enzyme concentration was calculated on the basis of a molecular weight of 520,000 (8) and the highest specific activity reported, 45 units per mg (10). The enzyme possesses two active sites per molecule (11-13).
The enzyme was made substrate-free by dialysis against 0.01 M potassium phosphate buffer, pH 7.4.
AdoCbl was obtained from Glaxo Laboratories. Ethanolamine, L-2-aminopropanol, and propionaldehyde were obtained from commercial sources and were redistilled before use. Both ethanolamine and L-2-aminopropanol were converted to their respective hydrochlorides.
All other reagents were of the highest purity obtained commercially.
L-2-Amino[U-14C]propanol was synthesized by reducing the trifluoroacetate salt of L-[U-14C]alanine with diborane (14). The trifluoroacetate was prepared by adsorbing 171 μmoles (7.7 × 10^5 dpm per μmole) of L-[U-14C]alanine (New England Nuclear) onto a column (5 × 40 mm) of Dowex 50-X8 resin (H+ form). After washing the column with 20 ml of water, L-[14C]alanine was eluted with 1.5 N trifluoroacetic acid. The eluate was evaporated to dryness on a rotary evaporator, and the glassy oil remaining was further dried in a vacuum desiccator over P2O5 for 2 hours. The reduction was carried out by the addition of 3 ml of 1 M diborane in tetrahydrofuran to the [14C]alanine trifluoroacetate; the reaction was allowed to proceed at room temperature for 2 hours. For isolation of the product, tetrahydrofuran was evaporated under a stream of nitrogen.
The residue was dissolved in 2 ml of anhydrous methanol containing 1 drop of trifluoroacetic acid and allowed to stir for 1 hour. The resulting solution was taken to dryness on a rotary evaporator, and the residue was twice dissolved in 3 ml of methanol and immediately evaporated to dryness each time. This procedure served to remove borate as its volatile methyl ester. The residual material was dissolved in 0.2 ml of H2O. L-2-Amino[14C]propanol was purified by chromatography on a column (0.9 × 36.5 cm) of Dowex 50-X8 resin using 0.2 M pyridinium acetate buffer (pH 3.5) as the developing solvent (15). The aqueous solution of radioactive propanolamine was diluted to 2 ml with 0.2 M pyridinium acetate buffer (pH 3.5), the pH adjusted to 2.2 with 3 N HCl, and the resulting solution applied to the Dowex column.
The radioactive fractions, which were eluted between 180 and 210 ml, were pooled and lyophilized.
The lyophilized material was dissolved in 1 ml of H2O and passed through a column (5 × 30 mm) of Dowex 1-X8 resin (OH- form).
The eluate was neutralized with 0.05 N HCl.
The over-all yield was 26%, and the specific activity of the product was 7.4 × 10^5 dpm per μmole.
Assays-Ethanolamine ammonia-lyase was assayed by measuring the rate of conversion of ethanolamine to acetaldehyde and ammonia.
The enzyme was diluted for assay to 0.2 to 0.8 unit per ml in 0.05 M potassium phosphate buffer, pH 7.4, containing 0.05 M ethanolamine hydrochloride.
The assay mixtures contained 0.2 ml of diluted enzyme, 50 μmoles of potassium phosphate buffer (pH 7.4), 100 μmoles of ethanolamine hydrochloride, and 0 or 0.013 μmole of AdoCbl in a total volume of 1.0 ml. The reaction was initiated by the addition of coenzyme and was incubated at 37° for 4 min. All reactions in which coenzyme was used were carried out in the dark. The reaction was stopped by the addition of 0.1 ml of 2 N HCl, and the amount of aldehyde produced was measured colorimetrically (16). Sodium pyruvate was used as a standard for the assay. One unit was defined as the amount of enzyme catalyzing the formation of 1 μmole of acetaldehyde per min. Protein was determined by the method of Lowry et al. (17) with appropriate correction (3). The concentration of adenosylcobalamin was assayed spectrophotometrically at 367 nm after conversion to dicyanocobalamin with KCN, using 30.4 × 10^3 M-1 cm-1 as the extinction coefficient (18). L-2-Aminopropanol was measured by oxidation with periodate and subsequent colorimetric determination of the formaldehyde produced (19). Aldehyde concentrations were determined as in the enzyme assay. L-Alanine was quantitated with ninhydrin (20).
Radiochemical assays were carried out by liquid scintillation counting with a solvent system consisting of 7 g of 2,5-diphenyloxazole, 300 mg of p-bis[2-(5-phenyloxazolyl)]benzene, and 100 g of naphthalene in 1 liter of dioxane solution. Radioactivity measurements were made using a Nuclear Chicago Mark I or Packard Tri-Carb (model 3320) liquid scintillation spectrometer. The measurements of radioactivity of the 2,4-dinitrophenylhydrazone derivatives were corrected for quenching by internal standardization. To isolate the tritiated propionaldehyde and L-2-aminopropanol produced in these experiments, the following procedure was employed. All procedures involving aldehydes were carried out at 0-4°. After the addition of carrier propionaldehyde and L-2-aminopropanol, the pH was adjusted to pH 5.5 to 6.0 with 0.2 M K2HPO4.
The solution was then treated twice with 10 to 20 mg of charcoal (Darco G60, Matheson, Coleman and Bell) to remove [3H]AdoCbl.
The charcoal was removed by centrifugation, and the supernatant was passed through a column of Dowex 50-X8 resin (5 X 50 mm or 10 X 100 mm, depending on the amount of carrier propanolamine added). Small aliquots were taken for measuring propionaldehyde, propanolamine, and radioactivity content before and after separation on Dowex.
Propionaldehyde was eluted with water; after washing the column exhaustively with H2O, L-2-aminopropanol was eluted with 1.5 N HCl. The propionaldehyde was isolated as the propionaldomethone derivative. This was prepared as described previously (4) and recrystallized to constant specific activity. The melting point of the recrystallized derivative (156-157°) was in agreement with the reported value (21). The acid eluate containing 2-aminopropanol hydrochloride was taken to dryness on a rotary evaporator.
The residue was twice dissolved in 1 ml of water and each time evaporated to dryness to remove any residual HCl. The material was further dried in vacuo over P2O5. The O,N-di(p-bromobenzoyl) derivative of 2-aminopropanol was prepared by a modification of the procedure of Jeger et al. (22). 2-Aminopropanol (350 to 1000 μmoles), dissolved in 1 ml of dry pyridine, was mixed with 3 ml of a suspension of 0.6 g of p-bromobenzoyl chloride in pyridine and stirred for 21 hours at room temperature.
Ice water was then added to the reaction mixture.
The precipitate which immediately appeared was collected and dissolved in 150 ml of benzene. This solution was extracted three times with equal volumes of a saturated solution of sodium bicarbonate to remove p-bromobenzoic acid. After being dried over anhydrous sodium sulfate, the organic layer was concentrated until crystals appeared. The material was recrystallized from hot benzene (m.p. 158-159° (uncorrected); literature, 155° (22)). The specific activity was the same after each crystallization.
The location of tritium in propionaldehyde was determined by oxidation to propionic acid, while in 2-aminopropanol its location was determined by degradation with periodate. Propionaldehyde was separated from propanolamine by ion exchange chromatography as described above. After purification by bulb-to-bulb distillation, the aldehyde was oxidized to propionic acid with KMnO4, maintaining the pH at 6.5 with additions of NaOH (23). Propionic acid was isolated by column chromatography on silicic acid (24) using 4% 1-butanol as developing solvent.
The fractions were titrated and assayed for radioactivity as published previously (4). The 2-aminopropanol eluted from the Dowex column (6.4 μmoles) was dissolved in 1 ml of H2O and was oxidized at pH 6.4 with 1 ml of 0.075 M sodium metaperiodate.
The reaction was allowed to proceed for 20 hours in the cold. Formaldehyde and acetaldehyde, formed in the reaction from the alcohol carbon (C-1) and the remainder of the molecule, respectively, were isolated by distillation. The 2,4-dinitrophenylhydrazone derivatives of these aldehydes were then prepared by adding 3 ml of 1% 2,4-dinitrophenylhydrazine in 3 N H2SO4 to the distillate.
The method for synthesis of these derivatives, their separation by paper chromatography, and their assay have been described previously (4).
RESULTS
The data presented in Fig. 1 show that ethanolamine ammonia-lyase catalyzes the conversion of L-2-aminopropanol to propionaldehyde and ammonia.
In the experiment shown, 210 nmoles each of propionaldehyde and ammonia were produced for every nanomole of active site present in the reaction mixture.
The turnover number of about 60 min-1 per active site indicates that this catalysis is considerably less efficient than that occurring with ethanolamine, the natural substrate (turnover number 8600 min-1 (11)). When L-2-aminopropanol was added to the enzyme·[3H]AdoCbl complex, tritium was transferred from coenzyme to both the product, propionaldehyde, and the starting material, 2-aminopropanol (Table I). Essentially all of the tritium lost from the coenzyme (i.e. not taken up by charcoal) was found in either propionaldehyde or propanolamine. The location of the tritium in each of the two compounds was established by the method described above.
The results, summarized in Table II, showed that 2-aminopropanol was labeled exclusively at C-1, while propionaldehyde was labeled in the α or β position. Based on earlier enzymatic reactions (4, 7, 25, 26), we conclude that the tritium was located solely on the α position.
Therefore, the action of ethanolamine ammonia-lyase on L-2-aminopropanol is very similar to its action on ethanolamine, except that with ethanolamine reversible hydrogen transfer between substrate and coenzyme does not occur (26). (Legend to Fig. 1: reaction mixtures contained 2-amino[14C]propanol (specific activity, 7.4 × 10^5 dpm per μmole) and 1.5 μmoles of potassium phosphate buffer, pH 7.4, in a total volume of 0.2 ml, with either 20 units (approximately 1.7 neq of "active sites") of ethanolamine ammonia-lyase or no enzyme. The enzyme was allowed to react with coenzyme for 3 min before 2-amino[14C]propanol was added to initiate the reaction. All incubations were carried out at 22°. Aliquots of 0.04 ml were removed from the reaction and quenched in 0.5 ml of 0.05 N HCl. Propionaldehyde carrier (5.6 μmoles) was added to each time point. After neutralization, the mixture was distilled, and propionaldehyde was isolated as the 2,4-dinitrophenylhydrazone derivative as described under "Materials and Methods.")
The results of Table III show that incubation of the enzyme·[3H]AdoCbl complex with propionaldehyde, the reaction product, results in the transfer of radioactivity from the coenzyme to a compound or compounds not adsorbed by charcoal, provided ammonia is also present.
In the absence of either ammonia or propionaldehyde, there is no loss of tritium from coenzyme. (In the latter experiments, the tritium which remained in solution after charcoal treatment appears to represent a contaminant of the coenzyme, since the same fraction of tritium remained in solution after charcoal treatment of a reaction mixture in which ethanolamine ammonia-lyase was replaced by albumin ("control," Table I).) As expected from the results of Table I, tritium was also lost from coenzyme when propionaldehyde and ammonia were replaced by L-2-aminopropanol. (Experimental conditions: the reaction mixture contained 30.5 units (approximately 2.6 neq of "active sites") of ethanolamine ammonia-lyase, 6.7 nmoles of [3H]AdoCbl (specific activity, 2.95 × 10^4 cpm per nmole), 53.2 μmoles of L-2-aminopropanol-HCl, 50 μmoles of potassium phosphate buffer (pH 7.4), and 1.1 mmoles of glycerol in a total volume of 1 ml. The reaction was started by the addition of enzyme and was allowed to proceed for 5 min at 37°. A control in which 100 μg of bovine serum albumin were substituted for enzyme was also included.)
Identification of the compounds to which tritium was transferred in the experiments with propionaldehyde led to the surprising discovery that the deamination of propanolamine was a reversible reaction.
This was indicated by the observation that tritium originally in the coenzyme was found not only in propionaldehyde, as expected, but also in propanolamine (Table IV). Tritium from the coenzyme is transferred to both propionaldehyde and propanolamine not only in the experiments presented in the table, but also in experiments in which 6% trichloroacetic acid had been used to terminate the reactions and no carrier had been added.
(In these experiments, aldehyde and amine were separated from each other on small columns of Dowex 50-X8 (H+) according to the method of Babior and Li (11), the tritium content in the aldehyde- and amine-containing fractions being determined without further purification.) From these results we conclude that the difference in the partitioning of tritium from enzyme·[3H]AdoCbl in the presence of substrate and product, respectively, did not arise from modification of the intermediate by interaction with either propionaldehyde or ammonia.
DISCUSSION
The distribution of tritium between L-2-aminopropanol and propionaldehyde was found to depend on whether the reaction was started with L-2-aminopropanol or propionaldehyde. The data in Table IV show that starting with L-2-aminopropanol, the ratio of total tritium in L-2-aminopropanol to that in propionaldehyde was 1.8. When the reaction was started with propionaldehyde, however, the ratio was 0.3. To establish that the change in the ratio was indeed dependent on the nature of the starting material and was not merely due to an allosteric interaction between the catalytic complex and other components of the reaction mixture, exchange of tritium into propanolamine (0.13 μmole) was measured as described in Table III, except that in addition to L-2-aminopropanol, propionaldehyde (6.8 μmoles) and NH4Cl (6.5 μmoles) were also present.
Under these conditions, the partitioning of tritium between propanolamine and propionaldehyde was the same as that observed with 2-aminopropanol alone.
The results reported here show that L-2-aminopropanol is a substrate for ethanolamine ammonia-lyase and is deaminated to propionaldehyde and NH4+. When the reaction was carried out in the presence of [3H]AdoCbl, tritium was transferred both to propanolamine and to propionaldehyde. Tritium derived from the coenzyme was found in the C-1 position of propanolamine, while in propionaldehyde the tritium was located α to the carbonyl group.
This labeling pattern is consistent with previously proposed mechanisms for this reaction (5, 26, 27). Therefore, we believe that the interaction of L-2-aminopropanol with the enzyme-coenzyme complex results in the formation of intermediates similar to those present in the normal catalytic pathway.
The reaction with L-2-aminopropanol, however, differs from that with ethanolamine in two respects.
(a) With L-2-aminopropanol, tritium from the coenzyme is transferred to the substrate.
No tritium transfer from coenzyme to substrate can be detected with ethanolamine.
No evidence for reversibility has been detected in the reaction with ethanolamine.
The over-all reversibility of the deamination of L-2-aminopropanol was established by the appearance of tritiated propanolamine in an experiment in which only propionaldehyde and ammonia were incubated with the enzyme·[3H]AdoCbl complex. Though these results show clearly that the reaction is reversible, they give no indication as to the equilibrium constant for the reaction. The data in Table IV show that 25% of the tritium lost from the cofactor appears in propanolamine, the remainder being found in propionaldehyde.
It is likely, however, that the tritiated propanolamine is in equilibrium with the tritium-labeled coenzyme, since the exchange of tritium between coenzyme and propanolamine is rapid with respect to the duration of the experiment in Table IV (see Table I). Under such circumstances, it would be expected that the specific activity of the propanolamine would be the same as the specific activity of the tritiated coenzyme, except for a statistical factor of 3 (assuming that the product of the reaction between propanolamine and adenosylcobalamin is 5'-deoxyadenosine) and whatever small equilibrium isotope effect the reaction may display.
The results of Table IV show, on the other hand, that hydrogen exchange between propionaldehyde and coenzyme is far from equilibrium, since the concentration of aldehyde exceeds the concentration of adenosylcobalamin by a factor of 1.7 × 10^4, while only 18% of the tritium originally in the coenzyme has been transferred to propionaldehyde.
Assuming isotopic equilibration between propanolamine and adenosylcobalamin, the quantity of propanolamine in the reaction mixture is calculated to be 0.072 nmole, compared with 6.8 μmoles of propionaldehyde. The similarity in the amounts of radioactivity can be ascribed to differences in specific activity, the propanolamine being much more highly labeled than the propionaldehyde.
The data in Table IV show that the ratio between the amount of tritium transferred from the coenzyme to substrate and that transferred to product depends upon whether the reaction is started with L-2-aminopropanol or with propionaldehyde and NH4+.
When started with propanolamine, the ratio of tritium in propanolamine to that in propionaldehyde is 1.8, while when started with propionaldehyde and NH4+, it is 0.3. Since the tritium distribution depends upon whether the reaction is started from the product or substrate side, it can be concluded that there must be more than one intermediate species which can exchange tritium with product or starting material.
If there were only one such species (Scheme 1), the tritium distribution should be the same whether the reaction is started with propanolamine or with propionaldehyde. Tritiated species are denoted by superscript 3. According to this scheme, the partitioning of tritium between starting material and product, represented by the ratio S3H:P3H, will depend only upon the fate of the tritiated intermediate I·E3H; i.e., it will be determined by the relative rates of Steps 3 and 4. In such a scheme, S3H:P3H will be the same whether the reaction is started with SH or PH. Since the observed ratios depend on the nature of the starting material, the single-intermediate mechanism is not applicable to the reaction investigated here.
A mechanism consistent with the results is the one shown in Scheme 2, involving two intermediates, each of which can exchange tritium with the coenzyme. In such a mechanism there are conditions under which the S3H:P3H ratio would depend upon the nature of the starting material (i.e., upon whether the reaction is started with SH or with PH). If the rate of interconversion of the two intermediates is slower than the rate of transfer of tritium to starting material (i.e., if k2 < k4 and k-2 < k5), then when SH is the substrate, S3H:P3H will depend upon the value k2/k4, while S3H:P3H obtained with PH as substrate will depend upon k-2/k5. If k2/k4 ≠ k-2/k5, then the partitioning of tritium will be different for each of the two starting materials.2 Mechanisms of this type have, in fact, been proposed for reactions involving adenosylcobalamin (5, 6, 25, 27). In these mechanisms, (I·EH)1 and (I·EH)2 correspond to enzyme-bound complexes consisting of a substrate-cobalamin adduct and 5'-deoxyadenosine and a product-cobalamin adduct and 5'-deoxyadenosine, respectively.
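Stated compactly, and using the rate constants named above, the limiting partition ratios under the assumption that interconversion of the two intermediates is slow relative to tritium transfer back to SH or forward to PH are:

```latex
% Sketch of the two-intermediate argument (amsmath assumed); notation follows the text above.
\[
\left.\frac{S^{3}\mathrm{H}}{P^{3}\mathrm{H}}\right|_{\text{start from }S\mathrm{H}}
  \approx \frac{k_{2}}{k_{4}},
\qquad
\left.\frac{S^{3}\mathrm{H}}{P^{3}\mathrm{H}}\right|_{\text{start from }P\mathrm{H}}
  \approx \frac{k_{-2}}{k_{5}}.
\]
```

Since the observed ratios (1.8 and 0.3) differ, k2/k4 ≠ k-2/k5, which requires at least two intermediates that exchange tritium with the coenzyme.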
The results reported here are, therefore, consistent with such a two-intermediate mechanism. An alternative possibility, that the partitioning of tritium simply reflects the relative amounts of propanolamine and propionaldehyde in the reaction mixture, can be excluded, because the partitioning is not proportional to the ratio of concentrations of the two species.
In this table are shown two exchange experiments, one starting with 0.31 mM propanolamine and the other with 1.0 mM propanolamine.
The same amount of tritium was washed out of the cofactor in each experiment, showing that the amount of propionaldehyde produced in the two experiments was the same, i.e. propanolamine was saturating at both concentrations.
The propanolamine to propionaldehyde ratio in one incubation mixture was therefore 3 times greater than in the other throughout the course of the reaction. Despite this circumstance, the partitioning of tritium between propanolamine and propionaldehyde was the same in the two incubations.
This finding constitutes strong evidence against the alternative mechanism outlined above. With ethanolamine, removal of the intermediate involved in this hydrogen transfer ((I·EH)1, Scheme 2) by the subsequent irreversible step would ensure that the reversibility of hydrogen transfer would be undetectable experimentally.
We propose that when L-2-aminopropanol is the substrate, the interconversion of the cobalamin adducts ((I·EH)1 and (I·EH)2 in Scheme 2) becomes rate-determining, so that the reversibility of formation of the first complex can now be observed.
Tritium transfer to both substrate and product can, therefore, occur. A consequence of the proposed change in rate-determining step is that when the substrate is propanolamine, the amount of the first intermediate, (I·EH)1, which accumulates under steady state conditions, should be larger than when it is ethanolamine.
In a subsequent communication, we will present evidence that this is, in fact, the case. | 5,209.4 | 1974-03-25T00:00:00.000 | [
"Chemistry"
] |
of Cartographic Materials of the National Library of Poland (NLP)
Access to information about cartographic materials in the National Library of Poland has been improved by new functionalities intended to keep up with users' new needs and technological trends. From mid-January 2010 on, readers of maps and atlases kept in the library can find catalogue descriptions of these materials through a special website. Three different forms of access to descriptions of cartographic materials are available on the website: (1) Catalogues/Catalogues online — database search in the central computer catalogue of the National Library of Poland (NLP); (2) Bibliographies online. Cartographic documents [download .pdf files — in Polish]; (3) Bibliographies online. Cartographic documents [search database] — access to the database in the MAK system containing items from the published bibliography of cartographic documents.
Since the beginning of the 21st century, our catalogue and bibliography have been under revision. First, we revised the rules of cartographic cataloguing in order to adapt them to the NLP library catalogue system and format for books and periodicals, which had been introduced earlier. A provisional instruction by Aniela Drozdowska 1 was replaced by the Polish norm, and later, step by step, we created methodological principles for cataloguing maps and atlases.
Methodological Principles of Creating a Catalogue and a Bibliography
Since 2001 all the bibliographical descriptions of maps and atlases in the National Library of Poland have been made according to Polish Standard PN-N-01152-5 Bibliographic Description. Cartographic Materials, 2 whose concept was elaborated by the staff of the NLP: Maria Janowska from the Centre of Standardisation of the Bibliographic Institute and Lucyna Szaniawska from the Map Department. Our starting point was the norm prepared under the auspices of the International Federation of Library Associations and Institutions, ISBD(CM): International Standard Bibliographic Description for Cartographic Materials, published in 1987, and its next edition (Revised Version, 1999). 3 The most important recent instructions for describing cartographic materials in Poland include:
In preparing some of the description formats, a number of other publications proved to be very helpful, such as the rules and applications contained in the Anglo-American Cataloguing Rules (Second Edition, 1988, chapter 3, Cartographic Materials). Also, Latin abbreviations, for example 'ca' or 'et al.', were standardized. In the mathematical data field, international abbreviations for geographic directions (N, S, W, E) are used, which are functional in the MARC 21 format.
In order to ensure uniformity of the descriptions, many practical problems had to be solved. Because abbreviations taken from Latin, such as ca, s.l., s.n., and et al., are applied in the INNOPAC database of the NLP, the Latin words verso and recto were also introduced. At present an entry may read, for example, 'Verso: tekst turyst.-krajozn.' [Verso: tourist and heritage text]. The less accessible MARC 21 version is also available for use by those librarians who are familiar with the format. The entity of a bibliographic record is an independently published cartographic material, which is most often identified by the publisher by assigning an ISBN number. The materials are registered on the so-called first level of description, and for that reason series of maps and multi-sheet maps as a whole are not described in the bibliography, but rather their sheets, which appear under their own titles. Figure 2 shows an example of a description of a map of the Grójec environs published in 2004. In this manner, almost 12,000 maps and atlases from the collection of the Map Department were described in the database. In the printed version, descriptions were alphabetically organised by geographical entries and then by map scales in descending order. Each issue of the BDK was accompanied by seven indexes: map titles, atlas titles, corporate names, personal names, series of sheets, subject entries, and ISBN numbers.
The Catalogue and Bibliography of the Cartographic Materials in the NLP
Index sheets constituted an additional search tool for multi-sheet maps. 6 Considering the fact that vast areas of the country have not yet been covered by map sheets, displaying the availability or lack of maps is very important information for a reader. Index maps used to be prepared (and they still are for the .pdf-file version) in cooperation with the Head Office of Geodesy and Cartography and the Hydrographic Office of the Polish Navy, which supervise the production of topographic, hydrographic, environmental and sea navigation maps.
An additional change is the possibility to retrieve data field 653 Index Term - Uncontrolled entry from the INNOPAC database, by means of which it is possible to determine the range of the entire described object more efficiently than previously (by field 651 Subject Added Entry - Geographic Name, subfield 'a' 'Geographic name'). In this way supplementary information could be added to names, e.g. Bytnica (okolice [= environs]), which means that the map represents the town with its unspecified surroundings. Another form of notation is used for elements of the description of sheets of multi-sheet maps: e.g. 'Brzeźnica (gmina [= commune])' means that the map represents the area of the commune together with the seat of its authorities, and 'Zamość' indicates a general plan of the town. Other elements of the map and atlas description have been left unchanged (Figure 4).
Together with the introduction of the .pdf format, the BDK introduced new presentation and navigation tools connected with this format. One of them is the possibility to use bookmarks when searching within an issue of the BDK.
These function in a similar way to the table of contents in a book: after clicking on them, the reader is taken straight to a particular part of the bibliography, i.e. the divisions and subdivisions of UDC and the particular indexes. The electronic form of the bibliography contains all indexes previously included in the printed version, i.e. the map titles index, atlas titles index, causative institutions index, personal name index, ISBN numbers index, series sheets index and subject headings index. To the indexes themselves some new search tools have been added, namely links between each description number given in an index entry and the description to which the entry pertains. For example, in the map title index, right after the title 'Białystok : plan miasta 1:17 000' the description number has been marked in purple, and by clicking on it the user goes directly to the full description of the map.
As stated before, a characteristic feature of the Polish 'Bibliography of Cartographic Materials' was the fact that it included printed index maps. Moreover, in 2009 sheet tables for land maps of the following series were included: Mapa hydrograficzna Polski 1:50 000, Mapa sozologiczna Polski 1:50 000, Mapa topograficzna Polski 1:10 000, as well as the multi-scale chart series: International Charts Series and Morskie Mapy Nawigacyjne. The images of the indexes were put in jpeg format (Figure 5). After each display of a series title in a full bibliographic description, links to the proper series sheet table were added. 7 In the near future we plan to add links from each cell symbolising a map sheet in the index to its bibliographic description. A file in .pdf is maybe not the most modern search tool, but it has important advantages: it is relatively cheap to produce, it does not require any special tools for text searching within a bibliography issue, and it provides the reader with an opportunity to print out, for his own needs, some material derived from the published editions up to 2007.
(3) The third way of accessing the 'Bibliografia Dokumentów Kartograficznych = Bibliography of Cartographic Materials' is the database of the National Library of Poland in the MAK system, which was installed on NLP staff's computers at the end of the 1980s. Since 1991 'Przewodnik Bibliograficzny' [= Bibliographic Guide] has been working with this system, cataloguing all the books published in Poland, among which there were also maps and atlases.
Cartographic materials were, in fact, described just like books; it was their first current registration. Since 1998 the MAKWWW system has been in operation. It has been reprogrammed to be used on the http://mak.bn.org.pl web page. Since 2002, it has been used to provide access to many different library databases, e.g. the national bibliography and special bibliographies. Among others there is a database of cartographic materials. It was installed on 20 October 2008 and it contains 5,630 bibliographic records. Until May 2010 it was visited by 9,820 readers from outside the NLP building. Naturally, there are typical choice options which serve as search tools and which are used for other library collections as well, such as books, periodicals, journal articles, electronic and audio documents.
Basic searches can be made by selecting the author's name, the title, a keyword, the subject access entry, the publisher's name, the international number (ISBN or ISSN), the number of the bibliography item, the year of publication or the series title. Another method, which is actually a form of advanced search, makes it possible to narrow down a search area with the help of several (up to five) criteria, joined by the conjunctions 'or' and 'and' (Figure 6). Another way of advanced searching consists of choosing an entry in the index, combined with the possibility of replacing the ending of a word with an asterisk.
The interface for bibliographic descriptions in the MAKWWW database for cartographic materials has a very clear typeface when compared to other similar databases in the leading European libraries (Figure 7). Because the only documents described are maps and atlases, the search for maps on a certain subject or maps of a certain area is very fast and precise. A similar search in the central database, to which over one million four hundred thousand descriptions of library entities have been added, renders too many results unless an advanced search query is used. For example, when one enters 'Warszawa' into the central database as the beginning of the title, the catalogue returns 1,733 hits, among which only a few concern maps; most of the hits concern books, guides, albums, prints, etc. If one enters 'Warszawa' as a subject in a subject entry, the result is even more useless: over 10,000 hits. 8 However, a similar search query by means of an advanced search, with the delimiter 'DOK.KART.' [= MAP in the English catalogue interface], returns 290 to 608 hits. In the MAKWWW database, 48 titles (tytuł in Polish) starting with the phrase 'Warszawa' and 203 subject entries (hasło_przedm in Polish) with the subject 'Warszawa' will be found.
Unfortunately, a limitation of this database is that at present only maps and atlases produced since 2000 have been included; however, a retrospective inclusion of cartographic materials published after World War II is being planned, together with cartographic materials published since 1928, which is the date when 'Przewodnik Bibliograficzny' first started.
A marginal remark: the criteria of advanced search are far more challenging for a reader than the previous tools. For example, if one searches by means of the keyword 'Warszawa' and limits the search to the document type 'DOK.KART.' [MAP in the English catalogue interface] by means of the delimiter 'and', one will get 7,870 hits. In the MAKWWW database the corresponding outcome is 301 hits. This shows that the search types have been incorrectly used. Using the name 'Warszawa' as a keyword, one must be aware of the fact that the hits will include not only appearances in the title (and not only at the first position) but also in the place of publication, the place of printing, in the corporate name and in the notes, i.e. cartographic material which does not represent Warsaw at all (Figure 8).
The work on providing access to online descriptions of cartographic materials in the databases of the NLP catalogue and bibliography has greatly accelerated this year. However, there are still many years to come before the library staff will be able to say 'it is done' and the entire, still growing collection will be catalogued. It is also necessary to add some modern navigators presenting images of high-resolution maps. Still, by providing online access to data, the NLP has joined the ranks of other leading European libraries. It is only fairly recently that many European libraries began discussing the option of moving from analogue to online bibliographies of cartographic materials.
Some countries, such as France in 2000 and Denmark in 2004, ceased printed publication altogether, but there are still some countries, like Germany, which print map and atlas bibliographies. 9 As a matter of fact, however, almost every library has a collection of catalogues available online, or they burn them onto CDs in order to be able to sell them. In Poland, it appears, returning to the printed form is neither possible nor justified, because every reader can download the file and print it.
Polish Cartographical Review, Vol. 39, 2007, No. 3, p. 271-273. The Bibliography does not contain a description of the cartographic materials in electronic form, because they are registered together with other electronic documents in a supplement to the 'Bibliographic Guide' entitled 'Bibliografia Dokumentow Elektronicznych' [= Bibliography of Electronic Resources]. However, the electronic materials accompanying the printed cartographic materials are taken into consideration.
7 A similar method, although realised by means of other technical solutions, was used by Państwowy Instytut Geologiczny [= Polish Geological Institute]. Textual bibliographical descriptions have been connected by links with the indexes available on the institute's webpage, which enables access to information about map sheets.
8 Since the beginning of 2001, research on improving the utility of the subject entry has been conducted; among others, Field 655 Index Term - Genre/Form has been used. A considerable improvement may only prove possible after the database itself is updated. To be fair, the geographical names 'Warszawa' and 'Polska' are, for obvious reasons, extreme examples in the Polish library database.
9 The presentation of bibliographies in chosen countries was the subject of a note by Lucyna Szaniawska entitled Polska 'Bibliografia Dokumentów Kartograficznych' na tle wybranych bibliografii w innych krajach europejskich [= The Polish 'Bibliography of Cartographic Materials' in comparison with chosen bibliographies in other European countries], in: 'Polish Cartographical Review', Vol. 38, 2006, No. 4, p. 326-329.
and explanations and examples included in the Handbook for AACR 2 Explaining and Illustrating Anglo-American Cataloguing Rules (Second Edition; American Library Association, 1988). In certain cases, international rules compiled and published on the IFLA web site since 2004 as the ISBD(CM) International Standard Bibliographic Description for Cartographic Materials (200x Revision) were also taken into consideration. Furthermore, bibliographic descriptions of atlases may be based on other standards: PN-91-N-01152-8 Opis bibliograficzny. Stare druki [= Bibliographic Description. Early Printed Books], published 1994, and Format MARC 21 rekordu bibliograficznego dla starych druków [online] [= MARC 21 Format for the Bibliographic Record of Early Printed Books] - [Warszawa]: Centrum NUKAT, December 2007 (available on the Internet).
Fig. 1: Standard description scheme of a cartographic object in the 'Bibliography of Cartographic Materials'.
Figure 1 shows how the rules were applied to the standard description scheme. It shows that the third level of detail is used, which includes all the elements allowed for in the Polish Standard. Every description is preceded by a heading created from the geographic name which most closely denotes the map's range, plus the scale of the dominating part of the object. When recording map data, the abbreviations used are in accordance with Polska Norma PN-85/N-01158 Skróty wyrazów i wyrażeń w opisie bibliograficznym [= Polish Standard PN-85/N-01158 Abbreviations of words and phrases in bibliographic description], which has been in force since 1987. The standard does not cover descriptive phrases used quite often by Polish authors of maps and atlases or words generally connected with their content. Therefore, correct bibliographic description of cartographic materials required the introduction of the following abbreviations: aktual. [update], dł. geogr. [longitude], dystr. [distribution], geogr. [geographic], hydrogr. [hydrographic], kartogr. [cartographic], krajozn. [heritage tourism], płd. [southern], szer. geogr. [latitude], tech. [technical], topogr. [topographic], turyst. [tourist], kolor. [colour], zdj. terenowe [field survey].
(1) Since 2002 all bibliographic descriptions of maps and atlases have been entered in MARC 21 format and added to a computer catalogue database installed on the NLP server, which operates under INNOPAC; the database belongs to the Map Department. In the OPAC (open public access catalogue), a special user version of the record is shown under 'catalogues' in the NLP online catalogue, which facilitates quick access.
Fig. 2: A screenshot of an exemplary tourist map from the "CATALOGUES" screen.
Fig. 3: An outline of UDC divisions and subdivisions applied to the BDK.
Fig. 4: A screenshot of a description of exemplary maps from the BDK screen in a .pdf file.
Fig. 6: Various methods of advanced search in the cartographic materials database in the MAK system.
Fig. 7: Screenshot with a bibliographic description of a tourist map of the Karkonosze Mountains.
Fig. 8: Advanced search results by means of the keyword 'Warszawa'.
(2) In 2009, the printed version was superseded by an electronic .pdf file, which is commonly used for displaying texts on the Internet. Up to now, the following new 'BDK' issues have been created: 2007 No. 1-2 and 2009 (January-December, containing mainly descriptions from the years 2008-2009). Maps and atlases which were not registered in the previous issues but were sent to the NLP by the end of 2009 were also added. Besides the medium of publication, quite a few other changes have been introduced, including a new configuration of the main part of the bibliography issue. For the first time an arrangement of bibliographic descriptions has been introduced which is compatible with the order of the standard UDC subdivisions. Symbols to be used in the bibliography were derived from the UDC tables published in Uniwersalna Klasyfikacja Dziesiętna: publikacja nr UDC-PO58 autoryzowana przez Konsorcjum UKD nr licencji UDC-2005/06. Wydanie skrócone dla bieżącej bibliografii narodowej i bibliotek publicznych [= Universal Decimal Classification: Edition No. UDC-PO58 authorised by the UDC Consortium, Licence No. UDC-2005/06. Short edition for the current national bibliography and public libraries]. The simplified list of UDC symbols is presented near the introduction. Only within divisions and subdivisions are the descriptions sorted in alphabetical order according to the names of geographical entries (Figure 3).
3 New version of this norm: IFLA International Federation of Library Associations and Institutions, ISBD(CM): International Standard Bibliographic Description for Cartographic Materials. 200x Revision. Prepared by the ISBD(CM) Working Group for approval by the Standing Committees of the IFLA Cataloguing Section. http://www.ifla.org/VII/s13/pubs/ISBD(CM)_21Dec04.pdf
4 Basic information on the manner of map and atlas registration in the 'Bibliography of Cartographic Materials' is to be found in each issue of the bibliography. It is the subject of Jerzy Ostrowski's review of the 'Bibliography of Cartographic Materials'. | 4,181.8 | 2010-09-29T00:00:00.000 | [
"Computer Science"
] |
Genomic characterization of extended-spectrum β-lactamase-producing Enterobacterales isolated from abdominal surgical patients
Rectal swabs of 104 patients who underwent abdominal surgery were screened for ESBL producers. Sequence types (STs) and resistance genes were identified by whole-genome sequencing of 46 isolates from 17 patients. All but seven isolates were assigned to recognized STs. While 18 ESBL-producing E. coli (EPEC) strains were of unique STs, ESBL-producing K. pneumoniae (EPKP) strains were mainly ST14 or ST15. Eight patients harboured strains of the same ST before and after abdominal surgery. The most prevalent resistance genes in E. coli were blaEC (69.57%), blaCTX-M (65.22%), and blaTEM (36.95%), while blaSHV was present only in K. pneumoniae (41.30%). Overall, genes encoding β-lactamases of classes A (blaCTX-M, blaTEM, blaSHV, blaZ), C (blaMIR and blaDHA), and D (blaOXA) were identified, the most prevalent variants being blaCTX-M-15, blaTEM-1B, blaSHV-28, and blaOXA-1. Interestingly, blaCMY-2, the most common pAmpC β-lactamase gene reported worldwide, and the mobile colistin resistance gene mcr-10.1 were also identified. The presence of blaCMY-2 and mcr-10.1 is concerning, as they may constitute a potentially high risk of pan-resistant post-surgical infections. It is imperative that healthcare professionals monitor intra-abdominal surgical site infections rigorously to prevent infections arising from faecal ESBL carriage in high-risk patients.
Introduction
Extended-spectrum β-lactamase-producing Enterobacterales (ESBL-PE) are a serious global health concern for the transmission of multidrug-resistant organisms, particularly Escherichia coli and Klebsiella pneumoniae. Hospital-acquired infections, including surgical site infections caused by ESBL-PE, are associated with considerable morbidity and mortality [1]. Contaminated surgical wounds and medical devices, along with admission to hospital more than 24 hours before surgery, were identified as the most statistically significant risk factors in a recent study [2], underlining the need for preventive measures to improve surgical outcomes [3].
We investigated whether faecal carriage of ESBL-producing organisms in patients before abdominal surgery constituted a source of post-surgical infections in these subjects. Isolates recovered from rectal swabs of 104 patients 1 day before and up to 3 days after surgery were characterized with respect to molecular typing and ESBL resistance genes to confirm prior colonization with, and persistence of, ESBL-PE strains.
Materials and methods
Rectal swabs were cultured on selective CHROMagar ESBL (SIGMA-ALDRICH, St. Louis, USA) and MacConkey agar (Becton, Dickinson, Sparks, USA). Isolates were identified to the species level by standard biochemical tests. Combination disk diffusion tests [3] were performed for phenotypic confirmation of the presence of ESBLs using appropriate control reference strains. ESBL production was confirmed by an increase of ≥5 mm in the inhibition zone of combination disks of ceftazidime (30 μg)/clavulanate (10 μg) or cefotaxime (30 μg)/clavulanate (10 μg) compared with CAZ (30 μg) or CTX (30 μg) alone. K. pneumoniae ATCC 700603 and E. coli ATCC 25922 were included as ESBL-positive and ESBL-negative controls.
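As a small worked illustration of the confirmation rule described above, the ≥5 mm criterion can be applied as follows; the zone diameters are invented example values, not data from this study.

```python
def is_esbl_positive(zone_with_clavulanate_mm: float, zone_alone_mm: float) -> bool:
    """Combination disk test: ESBL production is indicated by a >= 5 mm increase in the
    inhibition zone when clavulanate is added to ceftazidime or cefotaxime."""
    return (zone_with_clavulanate_mm - zone_alone_mm) >= 5.0

# Invented example: ceftazidime alone 14 mm, ceftazidime/clavulanate 23 mm -> ESBL positive.
print(is_esbl_positive(23.0, 14.0))
```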
Detection of resistance genes by whole-genome sequencing
Of the 104 patients who underwent abdominal surgery from July 2018 to March 2019, 31 were positive for ESBL-producing E. coli and K. pneumoniae in their faecal flora. From the 17 patients who yielded ESBL-PE organisms on pre- and post-surgical screening, 46 isolates were recovered, except for one patient (number 30) in whom ESBL-KP and KP R phenotypes were found only in the post-operation specimen. The 46 selected isolates were subjected to whole-genome sequence analysis.
The quantity and quality of DNA extracts were determined by gel electrophoresis and fluorescent measurement by Qubit assay (Thermo Fisher Scientific, Vilnius, Lithuania). DNA libraries were constructed using the MGIEasy FS DNA library kit and sequenced with a DNBSEQ-G400 sequencer (MGI Tech, Shenzhen, China). All isolates underwent a quality control process. Reads with a mean quality score <Q30 or length <36 base pairs were discarded. KRAKEN2 (v2.1.2) [4] was used to remove unclassified reads, and de novo assembly was performed with Unicycler (v0.5.0).
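A minimal sketch of the read-filtering criterion described above (discard reads with mean quality <Q30 or length <36 bp), assuming FASTQ input parsed with Biopython, is shown below; the file names are placeholders and this is not the exact pipeline used in the study.

```python
from Bio import SeqIO

def passes_qc(record, min_mean_q: float = 30.0, min_len: int = 36) -> bool:
    """Keep a read only if it is at least 36 bp long and its mean Phred quality is >= Q30."""
    quals = record.letter_annotations["phred_quality"]
    if len(record.seq) < min_len or not quals:
        return False
    return sum(quals) / len(quals) >= min_mean_q

# Placeholder file names; a real run would filter both mates of each pair consistently.
kept = [rec for rec in SeqIO.parse("sample_R1.fastq", "fastq") if passes_qc(rec)]
SeqIO.write(kept, "sample_R1.qc.fastq", "fastq")
```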
Results
All isolates were identified to species level as E. coli and K. pneumoniae, classified against the sequence taxonomic database. Species classifications were confirmed to be correct except for two isolates, SK106 and SK128, from patient numbers 24 and 29, respectively, which were reassigned from E. coli to Enterobacter roggenkampii (EER) (Supplementary Table S1). All but 7 isolates were assigned to an ST, and in total 23 different STs were identified (Supplementary Table S1). E. coli isolates exhibited the greatest heterogeneity, with 20 STs, whereas K. pneumoniae isolates were mainly ST14 and ST15. Almost all isolates from pre- and post-surgical samples shared the same ST, and isolates from 8 of the 17 patients fell into the same type. Notably, the K. pneumoniae isolated from patient number 30 belonged to the same genotype (ST14) as this patient's other isolates but harboured different resistance genes.
Sequence analysis revealed the presence of bla genes in addition to other resistance genes. The most prevalent bla ESBL genes in E. coli were blaEC (69.57%), blaCTX-M (65.22%), and blaTEM (36.95%), whereas blaSHV predominated in K. pneumoniae (41.30%) alone (Supplementary Table S2). The bla genes found in the isolates at pre- and post-surgery were generally of the same prevalence. For most patients, except for six individuals, the resistance gene profiles of isolates pre- and post-abdominal surgery were almost the same (Supplementary Table S2).
Discussion
Different databases and input sequence formats were used for sequence data analysis, leading to differing drug resistance identification results. Consequently, multiple databases were used to provide more accurate data. For example, ResFinder identified the SHV gene when using non-assembled sequences as input, but this gene was not flagged when using assembled sequences for samples SK116, SK125, SK126, and SK127. It is possible that the process was unable to assemble the SHV sequence due to the known performance limitations of de novo assembly on short-read data. However, the unknown STs and missing taxonomy classifications were recalled from KRAKEN2, which contains multiple taxonomical profiles of various species. Long-read sequence data are therefore necessary for further approaches. blaCTX-M, blaTEM and blaSHV are the most prevalent of the many ESBLs detected in various pathogens, and consequently they have become widely disseminated across various epidemiological niches. A previous study found SHV to be distributed mostly among K. pneumoniae [9], and here it was found only in this species. However, variants of the SHV type have been detected in other members of the Enterobacterales family and in Acinetobacter baumannii [10,11].
In this study, the presence of strains of the same ST types at pre- and post-surgery was interpreted as being indicative of colonization of the patient's gut by ESBL producers and other resistant strains before surgery. Plasmid-mediated resistance genes are readily transferable and often spread from one bacterium to another. It follows that the persistence of such strains can give rise to hazardous and difficult-to-treat post-surgical site infections. Hence, screening of patients before and after surgery to confirm persistent carriage of ESBL-PE strains is of practical benefit and increases clinical awareness of their potential transmission during surgery.
The multi-resistant EPEC ST131 strain has been reported worldwide due to its high risk of gastrointestinal tract infection, sometimes progressing to urinary tract infection and septicaemia. It is also widely distributed as a colonist among healthy individuals and animals [3,12,13]. This genotype is particularly associated with several resistance genes, notably blaCTX-M [13]. The isolates harbouring blaCMY-2, which is the most common pAmpC β-lactamase gene reported worldwide [14], and mcr-10.1 present a potentially high risk of infection during abdominal surgery in this study.
Colistin has only relatively recently been reintroduced as a last-resort antibiotic for combatting multidrug-resistant bacterial infections [15], but the presence of its resistance gene, mcr, in this study indicates that genetically mobilized colistin-resistant strains pose an emerging threat due to their associated high risk of morbidity and mortality. Variants of the mcr gene, including mcr-1 through mcr-10, have been identified in many bacteria globally [16].
In patient number 24, an EPEC strain was isolated before surgery and an ESBL-producing EER after surgery. Both isolates were positive for blaTEM-1B and blaCTX-M-3. These genes, and blaMIR-1, a plasmid-mediated class C (group 1) β-lactamase gene, confer resistance to oxyimino-β-lactams. They were detected in EER, while blaCMY-2 was found in EPEC. The presence of these plasmid-mediated genes in the two species may result in their transfer between strains during intestinal carriage. It is widely accepted that appropriate antibiotic use for prophylaxis is essential to reduce infections in high-risk patients. Likewise, guidelines for appropriate drug prescription for such individuals should be evaluated, and patients should be actively screened for carriage of ESBL producers and other resistance genes before surgery.
Extended-spectrum β-lactamase producers were not detected in 120 healthy adults in a previous report from a tertiary Thai hospital [17]. However, multidrug-resistant ESBL-producing E. coli and K. pneumoniae isolates were recently reported in approximately 30% of an elderly population living at home who had undergone abdominal surgery [18].
In conclusion, phenotypic and genotypic characterization of isolates of ESBL-producing E. coli and K. pneumoniae and of other strains carrying plasmid-mediated resistance genes, especially the mobilized colistin resistance gene mcr, is necessary to arrest their potential spread. This study provides detailed information on the species distribution and their resistance genes, which will aid the prevention and control of post-abdominal-surgery infections and the spread of resistance genes.
Supplementary material. The supplementary material for this article can be found at http://doi.org/10.1017/S0950268824000578.
Table 1. Distribution of bla genes among 46 strains isolated from rectal swabs of patients pre- and post-abdominal surgery | 2,123.8 | 2024-04-12T00:00:00.000 | [
"Medicine",
"Biology"
] |
Symmetric 1,1'-Dimethylferrocene-Derived Amino Acids: Their Synthesis, Characterization, Ligational and Biological Properties With Cu(II), Co(II) and Ni(II) Ions
Some novel symmetric 1,1′-dimethylferrocene-derived amino acids have been prepared by the reaction of 1,1′-ferrocenedimethyldichloride with amino acids (glycine, alanine, phenylalanine and tyrosine). Their Cu(II), Co(II) and Ni(II) complexes, of the type [M(L)] where [M = Cu(II) and L = L1-L5] and [M(L)Cl2] where [M = Co(II) and Ni(II), L = L1-L5], have been prepared. The dicarboxylic acids and their metal complexes were characterized by their physical, analytical and spectral data. The [M(L)] complexes showed a square-planar geometry, whereas an octahedral geometry was observed for the [M(L)Cl2] complexes. The title dicarboxylic acids and their metal complexes have also been screened for their antibacterial activity.
INTRODUCTION
There is significant evidence [1-4] that amino acid complexes have potential use in the treatment of tumors. Various tumors tend to have poor blood supplies, and amino acids have therefore been used effectively to direct nitrogen mustards into cancer cells. For example, phenylalanine mustard is used to control malignant myeloma and Burkitt's lymphoma, and sarcolysine is similarly used to treat a wide range of tumors. Indeed, certain tumors and cancer cells are unable to produce all the amino acids synthesized by normal cells, and therefore require an external supply of such essential amino acids delivered through the bloodstream. In the recent past a number of studies have highlighted the utility of ferrocene and its derivatives in various applications, yet very few ferrocene-derived compounds have been used as ligands in complex-formation reactions. Keeping in view the significance of amino acids and their complexes as chemotherapeutic agents, and the chemistry of ferrocene and ferrocene-containing compounds as stable intermediates, an effort was made to join the chemistry of amino acids and ferrocene. For this purpose, some novel symmetric 1,1′-dimethylferrocene-derived amino acids (Figure 1) have been synthesized and studied for their physicochemical properties, their ligational behavior with Cu(II), Co(II) and Ni(II) metal ions, and their antibacterial properties against the bacterial strains Escherichia coli, Staphylococcus aureus, Pseudomonas aeruginosa and Klebsiella pneumoniae.
MATERIALS AND METHODS
All solvents were of AnalaR grade. 1,1′-Ferrocenedimethanol was obtained from Merck, and the 1,1′-ferrocenedimethyldichloride derivative was prepared by a reported method [8] using thionyl chloride in triethylamine. All metals were used as their chlorides. IR, 1H NMR and 13C NMR spectra were recorded on Philips Analytical PU 9800 FTIR and Bruker 250 MHz instruments. UV-visible spectra were obtained on a Hitachi double-beam spectrophotometer. Conductance of the metal complexes was determined in DMF on a YSI-32 conductometer. Magnetic measurements were made on the solid complexes using the Gouy method. The synthesized dicarboxylic acids and their metal complexes were analyzed for C, H and N by Butterworth Laboratories Ltd. Melting points were recorded on a Gallenkamp apparatus and are uncorrected.
Synthesis of Dicarboxylic Acids
A mixture of 1,1′-ferrocenedimethanol (0.65 g, 3.0 mmol), triethylamine (0.60 g, 6.0 mmol) and dichloromethane (20 mL) was cooled in an ice bath. Thionyl chloride (0.71 g, 6.0 mmol) in dichloromethane (20 mL) was added to this mixture under N2 at a rate that kept the temperature between 15 and 20 °C. After complete addition, the reaction mixture was kept at 20 °C for 30 minutes and then stirred at 40 °C for another 30 minutes. Ice was added and the mixture stirred for another 5 minutes. A small amount of NaHCO3 was then added to obtain pH 6.0. The organic layer was separated and dried over CaCl2. Filtration and evaporation of the solvent gave a dark brown solid, which was dissolved in dichloromethane (20 mL), and each amino acid (glycine, alanine, phenylalanine or tyrosine) (1.25 mmol) in dichloromethane (20 mL) was added individually. The reaction mixture was refluxed for 5 h under a slow stream of N2.
After the mixture was allowed to cool to room temperature, the solvent was evaporated to give a yellow-orange solid, which was recrystallized from chloroform.
Synthesis of Metal Complexes
The dicarboxylic acid (1.0 mmol) was dissolved in ethanol (30 mL) and warmed for several minutes. A solution of the metal(II) chloride (1.0 mmol) in ethanol (20 mL) was added to the above solution. Then 2-3 drops of concentrated H2SO4 were added and the mixture was refluxed for 3 h. During this time a precipitate formed, which was filtered, washed several times with warm ethanol and diethyl ether, and then dried over anhydrous CaCl2.
Antibacterial Studies
The synthesized metal chelates, in comparison with the ligands, were screened for their antibacterial activity against the pathogenic bacterial strains Escherichia coli, Staphylococcus aureus, Pseudomonas aeruginosa and Klebsiella pneumoniae. The paper disc diffusion method was adopted for the determination of antibacterial activity [19,20].
RESULTS AND DISCUSSION
IR Spectra
The important infrared frequencies of the uncomplexed dicarboxylic acids and of their complexes, together with their assignments, are given in Tables 2 and 3, respectively. The IR spectra of the dicarboxylic acids show characteristic absorption bands at ~3372, ~1822 and ~1550 cm⁻¹ due to the ν(NH), ν(COOH) and ν(C=C) stretching vibrations [22], respectively. The bonding of the dicarboxylic acids to the metal atoms was investigated by comparing the IR spectra of the free dicarboxylic acids with those of their metal complexes. The spectra of the complexes show significant changes compared with those of the dicarboxylic acids. The bands due to ν(NH) move towards lower frequency by 5-10 cm⁻¹, indicating coordination to the metal atoms through the NH group. Deprotonation of the COOH group was also indicated in the spectra of the complexes, as a band due to ν(COO) was observed at ~1575 cm⁻¹, which in turn showed complexation through a deprotonated carboxylate group. Moreover, in the far-IR region, three new bands around 365, 415 and 450 cm⁻¹, assigned to the ν(M-Cl), ν(M-N) and ν(M-O) modes [23], respectively, were found in the spectra of the Co(II) and Ni(II) complexes but not in the spectra of the dicarboxylic acids. In the spectra of the Cu(II) complexes, only the ν(M-N) and ν(M-O) stretches were found; bands due to ν(M-Cl) in the far-IR region were absent, which indicates that the Co(II) and Ni(II) complexes possess an octahedral geometry and the Cu(II) complexes a square-planar geometry.
NMR Spectra
The 1H NMR and 13C NMR spectra of the free dicarboxylic acids, as well as of some of their metal complexes, recorded in DMSO-d6, are listed in Table 4. The free dicarboxylic acids exhibited signals for all the expected protons and carbons in their expected regions, and the integration curves were found to be equivalent to the total number of protons deduced from the proposed structures. These signals were identical to those reported for the known compounds [24-28] and therefore gave further support for the compositions of the new dicarboxylic acids and their complexes suggested by the IR and elemental analysis data. When these shifts were compared with those of the corresponding complexes, some resonances were shifted. In each case, a broad singlet occurring downfield at δ 8.7-8.9 ppm, assigned to NH, shifts towards higher field by 0.15-0.2 ppm in the complexes. Also, the protons due to COOH, found in the spectra of the dicarboxylic acids at δ 11.7-11.9 ppm, are absent in the spectra of the complexes, consistent with deprotonation [29] (Table 4). The spectra of the other metal complexes showed similar characteristic features, apart from shifts (0.5-1.5 ppm) of the signals, and are therefore not included in Table 4.
Electronic Spectra and Magnetic Moments
The electronic spectra of the Cu(II) complexes showed two weak low-energy bands at 15150-16355 cm⁻¹ and 18770-19585 cm⁻¹ and a strong high-energy band at 30345-31770 cm⁻¹. The low-energy bands are in positions characteristic of a square-planar configuration and may be assigned to the ²B1g → ²A1g and ²B1g → ²Eg transitions, respectively. The strong high-energy band is assigned to metal → ligand charge transfer.
Also, the magnetic moment values (1.5-1.9 B.M.) for the Cu(II) complexes were found to be consistent with the proposed square-planar structure (Fig. 2A).
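For context only (this relation is standard and is not quoted from the paper), the spin-only formula links the observed moment to the number of unpaired electrons; for d⁹ Cu(II), with one unpaired electron, it predicts about 1.73 B.M., consistent with the 1.5-1.9 B.M. range reported above.

```latex
% Spin-only magnetic moment for n unpaired electrons (standard reference
% relation, not taken from the paper itself):
\[
  \mu_{\mathrm{s.o.}} = \sqrt{n(n+2)}\;\mu_{\mathrm{B}},
  \qquad
  \mu_{\mathrm{s.o.}}\bigl(\mathrm{Cu^{2+}},\ d^{9},\ n=1\bigr)
    = \sqrt{1\cdot 3}\;\mu_{\mathrm{B}} \approx 1.73\ \mathrm{B.M.}
\]
```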
The spectral and magnetic data (Table 3) likewise indicated an octahedral geometry for the Ni(II) complexes (Fig. 2B).
Furthermore, a broad band centered at 22150-22500 cm⁻¹, observed for every complex, was assigned to the ¹A1g → ¹E1g transition of the iron atom of the ferrocenyl group, indicating that there is no magnetic interaction between the Cu(II), Co(II) and Ni(II) ions and the diamagnetic Fe(II) ion.
Based on the above observations, it is proposed that the Cu(II) complexes have a square-planar geometry (Fig. 2A), whereas the Co(II) and Ni(II) complexes are octahedral (Fig. 2B). The synthesized dicarboxylic acids and their metal complexes were evaluated for their antibacterial activity against Escherichia coli (a), Pseudomonas aeruginosa (b), Staphylococcus aureus (c) and Klebsiella pneumoniae (d). The compounds were tested at a concentration of 30 µg/0.01 mL in DMF solution using the paper disc diffusion method. The susceptibility zones, measured in mm, are reported in Table 5; they correspond to the clear zones around the discs. All the dicarboxylic acids were found to be biologically active, and their metal complexes showed more significant antibacterial activity against one or more bacterial species in comparison with the uncomplexed ligands. In most cases chelation tends to make the dicarboxylic acids act as more powerful and potent bactericidal agents, killing more of the bacteria than the parent dicarboxylic acids. A possible explanation for the increased activity of the complexes is proposed: in the chelated complex, the positive charge of the metal is partially shared with the donor atoms and there is π-electron delocalization over the whole chelate ring. This increases the lipophilic character of the metal chelate and favors its permeation through the lipoid layers of the bacterial membranes. | 2,293.2 | 2000-01-01T00:00:00.000 | [
"Chemistry"
] |
Current Uses of Mushrooms in Cancer Treatment and Their Anticancer Mechanisms
Cancer is the leading cause of mortality worldwide. Various chemotherapeutic drugs have been extensively used for cancer treatment. However, current anticancer drugs cause severe side effects and induce resistance. Therefore, the development of novel and effective anticancer agents with minimal or no side effects is important. Notably, natural compounds have been highlighted as anticancer drugs. Among them, many researchers have focused on mushrooms that have biological activities, including antitumor activity. The aim of this review is to discuss the anticancer potential of different mushrooms and the underlying molecular mechanisms. We provide information regarding the current clinical status and possible modes of molecular actions of various mushrooms and mushroom-derived compounds. This review will help researchers and clinicians in designing evidence-based preclinical and clinical studies to test the anticancer potential of mushrooms and their active compounds in different types of cancers.
Introduction
Cancer is one of the leading causes of deaths worldwide, and accounted for nearly 10 million deaths in 2020 according to WHO [1]. Accurate cancer diagnosis is important for effective treatment because each type of cancer requires a specific regimen. Cancer treatments are diverse, including surgery, anticancer drug treatment, radiotherapy, and systemic therapy (chemotherapy and targeted biological therapies). However, the current anticancer drugs available in the market are not target-specific, leading to the development of drug resistance, and even causing several side effects in clinical chemotherapy [2]. Therefore, it is important to develop novel and effective anticancer agents with low toxicity. In this regard, natural compounds have been highlighted as anticancer drugs. Mushrooms have been used in traditional medicines in East Asia due to their immunomodulatory, anticancer, and anti-inflammatory activities [2]. Among 14,000 different species of mushrooms, approximately 700 species have been reported to exhibit medicinal properties [3]. Recently, many studies have revealed the biological activities and the mechanisms of actions of mushroom compounds [2][3][4][5]. Some mushrooms and their active compounds possess anticancer properties. Polysaccharides isolated from Phellinus linteus (PLP) suppressed tumor growth and pulmonary metastasis through stimulating the immune response, not directly toxic to cancer cells [4]. Triterpenoids from Ganoderma lucidum showed anticancer properties [5]. β-D-glucans from Ganoderma lucidum exhibited anticancer effect by inhibiting cancer cells, protecting normal cells against free radicals, and reducing normal cell damage [2]. Their potential use as adjuncts in cancer therapy or as anticancer agents has emerged. Numerous clinical trials are in progress to assess the benefits of medicinal mushroom extracts in chemotherapy [2].
In this review, the clinical use and anticancer mechanisms of mushrooms are described. Our goal is to provide information pertaining to the potential therapeutic use of mushroom extracts and their active compounds against various cancers by elucidating the underlying targeted signaling pathways. Furthermore, these mushroom-derived compounds with anticancer activities can be exploited as novel anticancer agents.
Uses of Mushrooms in Cancer Therapy
Many groups have reported that mushrooms possess anticancer activities and minimize undesirable side effects such as nausea, bone marrow suppression, anemia, and insomnia, and lower drug resistance after chemotherapy and radiation therapy [6] (Tables 1 and S1). In the RCT conducted by Tsai et al., advanced adenocarcinoma patients treated with Antrodia cinnamomea alongside chemotherapy developed less severe gastrointestinal symptoms, such as abdominal pain and diarrhea, than those in the placebo group [7]. Twardowski et al. reported that Agaricus blazei Murill decreased prostate-specific antigen (PSA) levels and regulated recurrent prostate cancer by decreasing immunosuppressive factors [8]. Ahn et al. reported that patients with gynecological cancers receiving chemotherapy showed fewer side effects, such as loss of appetite, alopecia, and general weakness, when the therapy was accompanied by Agaricus blazei Murill compared to those in the placebo group [9]. Hetland et al. demonstrated that there was an increased number of plasmacytoid dendritic cells (pDC) and T regulatory cells (Tregs) in the blood; increased serum levels of IL-1Ra (receptor antagonist), IL-5, and IL-7; and an increased level of immunoglobulin genes, killer immunoglobulin receptor (KIR) genes, and human leukocyte antigen (HLA) genes in the bone marrow in Agaricus blazei mushroom extract (AndoSanTM)-treated myeloma patients [10]. Loss of appetite decreased over time in patients that underwent six cycles of chemotherapy accompanied by Agaricus sylvaticus, while most patients in the placebo group suffered from loss of appetite and gastrointestinal symptoms, such as diarrhea, constipation, nausea, and vomiting [11]. In advanced lung cancer, 3-84% of patients receiving Ganoderma lucidum exhibited significantly improved cancer-related symptoms (e.g., fever, cough, weakness, sweating, and insomnia) compared to the placebo group (11-43%) [12]. In the randomized controlled trial (RCT) conducted by Zhao, breast cancer patients treated with Ganoderma lucidum showed less cancer-related fatigue than patients undergoing endocrine therapy [13]. In a phase I/II trial of breast cancer survivors, Grifola frondosa extract acted as an immunomodulator by increasing the production of IL-2, IL-10, TNF-α, and IFN-γ by subsets of T cells [14]. Hackman et al. reported that Lentinula edodes treatment alone was ineffective in treating prostate cancer patients [15]. The anticancer activity of a semisynthetic derivative of illudin S from Omphalotus illudens is due to the alkylation of DNA, RNA, and proteins; however, its use in the clinic is limited by its strong retinal toxicity and narrow therapeutic index [16]. In a phase I trial conducted by Torkelson, Trametes versicolor enhanced the immune status of immunocompromised breast cancer patients [17]. In an RCT performed by Chay et al. in advanced hepatocellular carcinoma (HCC), patients treated with Trametes versicolor had a longer median overall survival (OS) and median progression-free survival than the placebo group [18]. The immunostimulatory effect and direct toxicity to cancer cells exhibited by Trametes versicolor polysaccharides imply that they can be applied as more than an adjuvant therapy [19].
Anticancer Compounds from Medicinal Mushrooms
The bioactive compounds found in mushrooms include polysaccharides, proteins, fats, phenolics, alkaloids, ergosterol, selenium, folate, enzymes, and organic acids. The anticancer components in mushrooms are antroquinonol, cordycepin, hispolon, lectin, krestin, polysaccharide, sulfated polysaccharide, lentinan, and Maitake D Fraction [20]. Many mushrooms are currently under clinical trials, and only a few are available for clinical use [2]. Polysaccharides are the most potent mushroom compounds with antitumor and immunomodulatory properties. Among polysaccharides, β-glucan consists of a backbone of glucose residues linked by β-(1→3)-glycosidic bonds, frequently with attached side-chain glucose residues joined by β-(1→6) linkages [21]. β-Glucan stimulates the immune system as a non-self molecule by inducing the production of cytokines that activate phagocytes and leukocytes [20]. Lentinan and lectins from Lentinula edodes have shown cytotoxic effects on breast cancer cells [22]. Lentinan from Lentinula edodes (also called Pyogo in Korea), schizophyllan (also called SPG, sonifilan, sizofiran, and sizofilan) from Schizophyllum commune, and PSK (also called krestin) from Trametes versicolor have been approved as prescription anticancer drugs in Japan [23]. Mushroom polysaccharides stimulate natural killer cells, T cells, B cells, and macrophages, leading to an increased immune response [24]. Cordycepin, also known as 3-deoxyadenosine, is a major anticancer compound in Cordyceps species. It exerts an apoptotic effect via dysregulated polyadenylation, and causes the termination of DNA or RNA elongation by binding to the site where nucleic acids are to be bound [25]. Hispolon, an active polyphenol compound, has been reported to exert potent antineoplastic properties and enhance the cytotoxicity of chemotherapeutic agents [26]. These findings suggest that some mushrooms may act synergistically in combination with commercial anticancer drugs as effective tools for treating drug-resistant cancers [6].
Anticancer Mechanisms of Medicinal Mushrooms
In Asian countries, medicinal use of mushrooms has been prevalent for a long time; however, in recent decades, their use for treating a number of diseases, including cancers, has increased in other parts of the world. The tremendous therapeutic potential of edible and medicinal mushrooms is attributed to the bioactive substances present in mushrooms. To increase the therapeutic success rates against cancer, it is important to understand the molecular mechanisms underlying cancer development and progression and the molecular targets of mushroom-derived bioactive compounds. In this section, we discuss how mushrooms help overcome multidrug resistance (MDR) and target signaling pathways, such as PI3K/AKT, Wnt-CTNNB1, and MAPK, during cancer treatment (Table 2).
[Fragment of Table 2 recovered from the text: Antrodia camphorata grown on germinated brown rice (CBR) [30], colon cancer, β-catenin pathway↓; Cantharellus cibarius [31], drug resistance in Pgp-expressing tumor cells↓; cordycepin from Cordyceps militaris [32], NRK-52E cell line, NF-κB↓; Ganoderma lucidum [33], PD-1 protein↓.]
Overcoming Pgp-Mediated MDR Using Mushrooms
Drug resistance is a major obstacle in chemotherapy. Endogenous or acquired drug resistance represents the simultaneous development of resistance in tumor cells to drugs that are not mechanistically or structurally related. This phenomenon is known as multidrug resistance (MDR). Resistance to anticancer drugs is one of the main factors in treatment failure, resulting in high morbidity. The overexpression of efflux pumps, which leads to multidrug resistance, is an important issue that needs to be resolved.
A major form of MDR is the overexpression of ATP-binding cassette (ABC) transporters and P-glycoprotein (Pgp), a 170 kDa transmembrane glycoprotein encoded by the MDR1 gene. The mechanism of drug resistance in Pgp-expressing tumor cells involves an increase in the extracellular transport of various chemotherapeutic agents, which diminishes cellular accumulation and thus decreases drug efficacy [31,44,45]. Anticancer agents subject to Pgp-mediated MDR include paclitaxel (TAX), doxorubicin (DOX), actinomycin D, vinblastine, and etoposide, whereas Pgp does not affect the cytotoxicity of certain other anticancer drugs, such as 5-fluorouracil, cisplatin, and carboplatin [46]. To confirm the drug-resistance reversal activity of Basidiomycete mushroom extracts collected in Korea, the cytotoxic activity of paclitaxel (TAX), a well-known Pgp-related anticancer drug, on Pgp-positive and -negative human cancer cells in the presence or absence of the tested mushroom extract was compared to that in the presence of verapamil (VER), a well-known MDR reversal agent. Cantharellus cibarius (M02) and Russula emetica (M12) increased the cytotoxic activity of TAX by blocking Pgp-mediated drug efflux in Pgp-positive HCT15 and MES-SA/dX5 cancer cells, but not in Pgp-negative A549 and MES-SA cancer cells. Cantharellus cibarius and Russula emetica also increased the cytotoxicity of doxorubicin, another Pgp-associated anticancer drug, against MES-SA/DX5 cells [31]. Ganoderma species induced apoptosis in drug-sensitive (H69) and multidrug-resistant (VPA) human small-cell lung cancer (SCLC) cells that were resistant to etoposide and doxorubicin [47]. Ganoderma lucidum polysaccharide (PLP) inhibits the constitutive activation of NF-κB, decreasing the expression of Pgp in cancer cells [48]. Zhankuic acids A-C isolated from Taiwanofungus camphoratus were found to exert inhibitory effects against Pgp, which reversed drug resistance against doxorubicin, vincristine, and paclitaxel in human cancer cells [43] (Figure 1).
Overcoming Tumor Resistance by Inhibiting Immune Checkpoint Interactions, the PD-1 Pathway, and CTLA-4/CD80, Using Mushrooms
In recent years, immune checkpoint blockade (ICB) therapy has caused a paradigm shift in cancer immunotherapy; it primarily inhibits various checkpoints that control host T cell activity by regulating the immune checkpoint interactions PD-1/PD-L1 and CTLA-4/CD80. Programmed cell death-1 (PD-1) (CD279) is an inhibitory receptor expressed on activated CD8+ T cells (as well as on B cells and natural killer cells) that leads to reduced innate and adaptive immune responses [49]. In particular, PD-1 is highly expressed in tumor-specific T cells, which has prompted researchers to examine whether inhibition of PD-1 suppresses cancer aggression by promoting an effective immune response [50,51]. The binding of PD-1 to its ligand PD-L1 allows cancer cells to evade the host immune response, and PD-L1 induction protects cancer cells from T-cell-mediated destruction [52]. PD-1/PD-L1-checkpoint-blocking antibodies have therefore attracted attention as a powerful ICB therapy for cancer patients. However, patients with cancer quickly develop resistance to immunotherapy. β-Glucan from medicinal mushrooms, which acts as an immune adjuvant, has been found to stimulate innate and adaptive immune responses. It has been reported that administration of whole glucan particle (WGP) β-glucan along with PD-1/PD-L1-checkpoint-blocking antibodies leads to increased recruitment of immune-associated cells, improves the regulation of the balance between T cell activation and immune tolerance, and delays tumor progression [53]. This combination therapy was also found to improve progression-free survival in patients with advanced cancer who had previously discontinued anti-PD-1/PD-L1 therapy because of disease progression. These findings suggest that β-glucan can be used as an immune adjuvant to reverse anti-PD-1/PD-L1 resistance by regulating the immune system [54]. Ganoderma lucidum and its bioactive compounds reduced the PD-1 protein level in cultured GM00130 and GM02248 human B-lymphocytes, with implications for preventing and treating cancer [33] (Figure 2).
The binding between CD80 on antigen-presenting cells (APCs) and CD28 on naive T cells results in T cell activation in the lymph nodes, which elevates the immune response and kills cancer cells. However, the interaction of CTLA-4 on naive T cells with CD80 on cancer cells produces an inhibitory signal for T cell activation, leading to suppression of the immune response [55]. Inonotus obliquus blocks the CTLA-4/CD80 interaction and increases T cell activity so that cancer cells cannot escape the immune response [36] (Figure 2).
Targeting the PI3K/AKT Signaling Pathway in Cancer Using Mushrooms
The PI3K/AKT pathway is involved in the acquisition of chemotherapeutic drug resistance. The activation of phosphatidylinositol 3-kinase (PI3K) signaling leads to VEGF production, reduces tumor CD8+ T cell infiltration, and induces subsequent resistance to PD-1 blockade therapy. Currently, to avoid primary resistance to the PD-1/PD-L1 blockade, clinical treatment regimens that combine kinase inhibitor therapy with an immune checkpoint blockade are in use to enhance the response rates [52]. Activated phosphoinositide 3-kinase (PI3K) is a key signaling molecule that affects cell survival, proliferation, and differentiation by triggering the sequential activation of AKT and other downstream pathways [2,56]. When many components of this pathway are altered, it leads to the development of various cancers in humans [2].
Several research groups have demonstrated that mushroom-derived compounds can exert antitumor and antimetastatic effects by affecting various molecules in the PI3K/AKT pathway. For example, hispolon derived from Phellinus linteus inhibited the invasion and motility of a highly metastatic liver cancer cell line (SK-Hep1) by downregulating MMP2, MMP9, and uPA, and inhibiting the activation of the ERK1/2, PI3K/AKT, and FAK pathways [38]. Proteoglycan (P1) from Phellinus linteus showed antiproliferative activity in multiple human cancer cells by inducing a notable decrease in AKT, Reg IV, EGFR, and plasma PGE2 concentrations [57]. A polysaccharide-protein complex isolated from Pleurotus pulmonarius (PP) suppressed PI3K/AKT signaling in liver cancer cells [2]. Additionally, when PP was used in combination with cisplatin, the sensitization of liver cancer cells to cisplatin was improved. The phosphorylation of BAD at Ser 136 via AKT is required for cell viability. When this AKT node is suppressed in ovarian cancer cells, they become sensitive to cisplatin. Inhibition of PI3K/AKT signaling by PP made the cells more sensitive to cisplatin [2]. Ganoderic acid from Ganoderma lucidum suppressed human glioblastoma by inducing apoptosis and autophagy via the inactivation of the PI3K/AKT signaling pathway [58].
Targeting the Wnt/β-Catenin Pathway in Cancer Using Mushrooms
A high rate of abnormality in the Wnt signaling pathway has been observed in many cancers. APC, CTNNB1, AXIN1, FAM123B, and TCF7L2 are the key molecules in Wnt signaling pathway that may undergo somatic mutations related to common human cancers, including colon cancer [59]. The onset and progression of sporadic colon cancer (CRC) and familial adenomatous polyposis (FAP)-associated diseases are believed to be caused by mutations in the adenomatous polyposis coli (APC) gene [2]. Depending on the stage and type of cancer, the Wnt-CTNNB1 signaling pathway can either promote or inhibit tumor initiation, growth, metastasis, and drug resistance [2]. An inverse correlation has been reported between β-catenin/Wnt activation in cancer and the degree of CD8+ T cell infiltration in a mouse model of melanoma [60]. Increased β-catenin/Wnt activity also correlated with diminished CD103+ dendritic cell infiltration due to reduced levels of CCL4, a chemokine responsible for attracting them. PD-1 blockade therapy was ineffective in melanoma tumors with β-catenin/Wnt activation, whereas this treatment worked well in tumors without β-catenin/Wnt mutations [52,61]. Wnt inhibition increased tumor T cell infiltration and inhibited tumor proliferation and migration by enhancing PD-1 antibody treatment and upregulating the expression of PD-L1 in mice with glioblastoma (GBM) [62].
Several groups have reported the anti-oncogenic activities of different mushroom-derived compounds via Wnt-CTNNB1 signaling. For instance, 4-acetylantroquinonol and antroquinonol from Antrodia camphorata were discovered to inhibit colon cancer by suppressing the Wnt/β-catenin pathway [28,29]. Antrodia camphorata grown on germinated brown rice suppressed human colon cancer cell proliferation by upregulating G0/G1 phase arrest and apoptosis and reducing β-catenin signaling [40]. Researchers have shown that Phellinus linteus can inhibit tumor growth, invasion, and angiogenesis by downregulating genes (cyclin D1 and TCF/LEF) of the Wnt signaling pathway in SW480 human colon cancer cells as well as in vivo. Phellinus linteus grown on germinated brown rice attenuated the levels of NF-κB, β-catenin, and mitogen-activated protein kinase (MAPK) proteins [41]. Ergosterol peroxide and 4-acetylantroquinonol from Inonotus obliquus inhibited nuclear β-catenin in colon cancer cells [2]. When the level of β-catenin activity was reduced, the expression of β-catenin target genes (c-myc, cyclin D1, and VEGF) was also decreased, thus exerting anticancer effects on meningioma cells [63]. Therefore, these compounds may be potential candidates for pharmaceutical treatment of human meningiomas.
Targeting the MAPK Pathway in Cancer Using Mushrooms
Other mutations associated with T cell exclusion and subsequent resistance to PD-1/PD-L1 blockade usually occur within the mitogen-activated protein kinase (MAPK) signaling cascade. Constitutive oncogenic signaling activated through this pathway leads to the production of immunosuppressive cytokines, viz., vascular endothelial growth factor (VEGF) and interleukin 8 (IL-8), which inhibit T cell recruitment to the cancer tissue as well as their activity [52]. Mutations within the MAPK cascade are common in melanomas, and inhibition of this cascade is known to improve CD8+ T cell infiltration within cancers and sensitize them to PD-1 blockade therapy [52]. This result strongly suggests that a combination therapy involving multikinase inhibition with PD-1 blockade can be used in cancers with such mutations.
Platinum-based anticancer drugs have been shown to upregulate PD-L1 expression through the MAPK pathway in gastric cancer cells. A β-glucan from Lentinula edodes, viz., lentinan, suppressed cisplatin- or oxaliplatin-induced PD-L1 expression, suggesting that a combination of lentinan and platinum-based chemotherapy can recover the chemosensitivity of cells [64]. In addition, the MAPK pathway plays a pivotal role in oncogenesis, as several oncoproteins upstream of the MAPK cascade, including ErbB-2, Src RTKs, Ras, and Raf, are mutated into activated forms of the enzymes [65]. The triterpene-enriched fraction WEES-G6 from Ganoderma lucidum inhibited Huh-7 human hepatoma cell growth [66,67]. Yang et al. showed that Antrodia camphorata markedly inhibited the MAPK signaling pathway, thereby suppressing the invasion/migration of highly metastatic MDA-MB-231 cells [26,68]. Furthermore, G. lucidum triterpene extract (GLT) suppressed the phosphorylation of p38 MAPK, leading to autophagy in colon cancer cells [69]. Co-treatment with the extract of Phellinus linteus grown on germinated brown rice (PBR) and cetuximab reduced MAPK signaling by decreasing KRAS expression. PBR enhanced the sensitivity of KRAS-mutated colon cancer cells to cetuximab [40].
Targeting the NF-κB Pathway in Cancer Using Mushrooms
The nuclear transcription factor κB (NF-κB) is one of the factors responsible for cellular chemoresistance, and controls a myriad of gene expressions, including antigen receptors on immune cells, adhesion molecules, proinflammatory cytokines, and chemoattractants for inflammatory cells [70]. NF-κB is associated with neoplastic development, including insensitivity to growth inhibitory signals, avoidance of apoptosis, metastasis and sustained angiogenesis, and chronic inflammation [71].
Numerous studies highlight the antitumor effect of mushrooms through targeting of the NF-κB signaling pathway. Antroquinonol (AQ) and 4-acetylantroquinonol B (4-AAQB) from Antrodia camphorata exhibited inhibitory effects on NF-κB signaling in MCF-7 breast cancer cells [72]. Kadomatsu et al. found that cells treated with cordycepin, a major compound of Cordyceps militaris, became sensitive to TNF-α-mediated apoptosis because cordycepin suppresses pro-survival NF-κB signaling. Ho's group reported that Ganoderma suppressed metastasis in highly invasive breast and prostate cancer cells by blocking constitutively active AP-1 and NF-κB signaling [73]. Treatment with a sulfated polysaccharide obtained from Grifola frondosa (S-GFB) resulted in apoptosis of HepG2 cells through the induction of S-phase arrest, inhibition of Notch1 expression, degradation of IκB-α, translocation of NF-κB from the cytoplasm to the nucleus, and activation of caspases 3 and 8 [35]. Phellinus linteus was shown to produce caffeic acid phenethyl ester (CAPE), which specifically inhibits NF-κB binding to DNA [74,75].
Regulating Immune Function in Cancer Using Medicinal Mushrooms
Despite the increasing success of existing personalized cancer treatments, recurrence and metastasis are common, depending on the type and stage of the disease [11]. Many studies have reported the beneficial effects of medicinal mushrooms, which particularly enhance the quality of life and reduce the side effects of conventional chemotherapy. In addition, their positive effects on anticancer activity and immune regulation have been reported. Several mechanisms have been suggested for the antiproliferative and immunomodulatory effects of medicinal mushrooms [2,20,76,77]. Mushroom polysaccharides stimulate dormant natural killer cells, T cells, B cells, and macrophage-dependent immune responses [24].
Mushroom-derived compounds activate immune cells to induce either cell-mediated or direct cytotoxicity in cancer cells by binding to pathogen recognition receptors. Compounds, such as lentinan, increase the proliferation of cytotoxic T lymphocytes and macrophages and induce nonspecific immune responses [78]. Pleurotus tuber and Pleurotus rhinoceros extracts were shown to promote the activation of lymphocytes and NK cells and increase macrophage proliferation, T helper cell number, and CD4/CD8 ratio and population, conferring anticancer effects [9,11,20,79]. Natural killer cells act as the key cells in innate immunity by attacking major histocompatibility class I-negative target cells that can evade immune surveillance of cytotoxic T cells. The activity of natural killer cells in patients with gynecological cancer undergoing chemotherapy was significantly enhanced when co-treated with Agaricus blazei Murill for 3 to 6 weeks as compared to that in the placebo group [9]. Leukopenia results in cachexia and metabolic changes in cancer patients and increases the risk of infection [80]. In patients with multiple myeloma treated with Agaricus blazei Murill, the immune status was much better in terms of maintaining the population of white blood cells and immunoglobins and led to fewer infections [81]. The mushroom's main component, β-glucan, exerts hematopoietic effects and increases bone marrow regeneration in vitro [82]. β-Glucan also significantly increased the population of DCs (CD11c+/CD8+) and macrophages (CD11b+/F4-80+) and decreased the population of regulatory T cells and myeloid-derived suppressor cells (MDSC)s, resulting in an enhanced immune response [54]. Ganoderma lucidum supplementation resulted in a more stable disease state in the lung cancer study population than in the control group [83]. In addition, there was a significant increase in CD3 percentage, natural killer cell activity, and lymphocyte mitogenic reactivity against concanavalin A in lung cancer patients [11]. Cordyceps militaris fermented with Pediococcus pentosaceus (GRC-ON89A) treatment was reported to aid the recovery of immune activity in high-dose cyclophosphamide (a chemotherapeutic drug)-treated mice by increasing the phagocytic activity of mouse peritoneal macrophages and stimulating NO production in macrophages. GRC-ON89A reduced the toxicity of anticancer agents through the recovery of the immune system [84].
Thus, mushroom-derived compounds induce innate and adaptive immunity by enhancing immune surveillance against cancer by affecting monocytes, macrophages, NK cells, and B cells, and by activating immune organs [85,86], which leads to cancer cell apoptosis, cell cycle arrest, and prevention of angiogenesis and metastasis [20]. Consumption of mushroom compounds also boosts the secretion of antitumor cytokines by CTLs and activation of immune organs, thereby eliminating cancer cells and strengthening the weakened immune system [87].
Prebiotic Properties of Medicinal Mushrooms in Cancer
Several groups have reported that medicinal mushrooms can act as prebiotics and thus enhance the growth of beneficial microbiota. Prebiotics can affect the human intestinal microbial population and suppress various diseases such as diabetes, obesity, and cancer [77]. The important sources of prebiotics in mushrooms are polysaccharides, such as chitin, hemicellulose, β- and α-glucans, mannans, xylans, and galactans, which can suppress the proliferation of pathogens by increasing the growth of probiotics in the gut [6,88]. A poor intestinal microbiota composition can lead to the development of cancer [88]. It is possible that the prebiotic effects of medicinal mushrooms could enhance quality of life (QOL) during and after cancer therapy.
Conclusions
This review demonstrates the potential use of mushrooms and their anticancer mechanisms in cancer treatment. Mushroom-derived bioactive compounds activate and/or regulate the immune system by affecting the maturation, differentiation, and proliferation of immune cells, thereby inhibiting cancer cell metastasis and growth. It is very important to understand the underlying mechanisms of action of the anticancer compounds derived from mushrooms to suppress cancer and improve the quality of life of cancer patients. Mushrooms show anticancer potential by regulating a single molecule of a specific signaling pathway, or by having multiple targets in the same or different signaling pathway(s), including the PI3K/Akt, Wnt/β-catenin, and MAPK pathways. In addition, several studies have highlighted the effect of mushroom-derived components as single and adjuvant therapeutic agents in reversing MDR by targeting Pgp, PD-1/PD-L1, and CTLA-4/CD80 interactions. In addition, the prebiotic effects of medicinal mushrooms could enhance quality of life (QOL) during and after cancer therapy by recovering the intestinal microbiota.
However, only a few clinical studies of a small number of mushrooms demonstrate the positive effects of medicinal mushrooms, including reductions in the adverse effects of conventional therapies, as well as antitumor activity and immunomodulation. Therefore, more clinical research on mushrooms with anticancer potential needs to be conducted, especially by employing high-quality methodology, larger sample sizes, standard mushroom preparations, and long-term follow-ups. In addition, future studies should investigate the preventive aspects of medicinal mushrooms in reducing the rate of cancer occurrence by being a part of a healthy diet and lifestyle. High-quality clinical studies are needed to identify the potential of medicinal mushrooms in cancer treatment. | 6,329.4 | 2022-09-01T00:00:00.000 | [
"Biology"
] |
Morning quiet-time ionospheric current reversal at mid to high latitudes
Abstract. The records of an array of magnetometers set up across the Australian mainland are examined. In addition to a well-defined current whorl corresponding to the ionospheric Sq current system, another system of eastward flowing currents is often found in the early morning. The system is most easily identified at observatories poleward of the focus of the Sq system, where a morning reversal from eastward to westward currents can be seen. The time of the reversal is usually later in June (Southern Winter) than in other seasons, sometimes as late as local noon. There is some evidence of a similar current system at other longitudes and in the Northern Hemisphere. An important outcome of the study is that it enables identification of which features of a daily variation of the northward magnetic field ΔX relate to an Sq current whorl and which must be attributed to some other current system.
Introduction
When we look at magnetic observatory records, we see variations of the magnetic field, including the Earth's main field, but it is often hard to identify the time of reversal of the ionospheric currents which are responsible for the daily variations. This problem has plagued researchers since early times. It was particularly difficult at low latitudes and at the magnetic equator because there is often just one peak in the daily variation of the horizontal component of the magnetic field, H, with time. We had to wait until another method of determining the direction of current flow was developed. This arrived when Balsley (1969) measured electron drift velocities in the equatorial electrojet. He found that the current reversed direction within one hour of 06:30 LT. F-region vertical drift velocities reversed at the same time (Morriss and Lyon, 1966). Since the equatorial electrojet is primarily driven by the electric field responsible for vertical F-region drifts, the reversal time of these drifts should also give a close approximation to the time of reversal of the electrojet current. Fejer et al. (1991) have since provided information on seasonal variations of the reversal time in the Peruvian electrojet. Knowledge of this reversal time placed an additional constraint on the modelling of the Sq current system (Stening, 1970).
Recently, an examination was made of records from an array of more than fifty magnetic observatories spread across the Australian mainland. Details of this experiment may be found in Chamalaun and Barton (1993). The experiment is called AWAGS (Australia-Wide Array of Geomagnetic Stations). Often the Sq current whorl was clearly seen in plots of the results from the magnetometer array, but observatories south of the focus, which would record a westward current as the whorl passed across them, showed an eastward current earlier in the day. This prompted an investigation into the daily variations of H at observatories poleward of the Sq focus. The times of reversal of the current at these latitudes will be another parameter which any modelling of the Sq current will be required to fit.
Examination of data
For each observatory in the Australian mainland experiment, the one-minute data are averaged into hourly means. The hourly mean values have a linear trend removed, and midnight values are subtracted to give a daily variation. The magnitude and direction of the horizontal component of the magnetic field are then determined, and this vector is rotated clockwise through 90° to represent an equivalent ionospheric current flow. Note that the assumption that the ionospheric current flow is zero at midnight may not be exactly true on all occasions. Table 1 gives a list of the observatories referred to, their codes and their coordinates.
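The processing chain just described (hourly averaging, removal of a linear trend, subtraction of the midnight value, and a 90° clockwise rotation of the horizontal disturbance vector) can be sketched as follows; the synthetic input and variable names are illustrative only and do not represent the AWAGS data format.

```python
# Minimal sketch of the processing steps described above, applied to one day of
# synthetic one-minute data; the input and names are illustrative only.
import numpy as np

minutes = np.arange(24 * 60)
x_min = 20.0 * np.sin(2 * np.pi * minutes / minutes.size) + 0.01 * minutes  # northward (nT)
y_min = 10.0 * np.cos(2 * np.pi * minutes / minutes.size)                   # eastward (nT)

def daily_variation(series_1min):
    """Hourly means, linear trend removed, midnight (first-hour) value subtracted."""
    hourly = series_1min.reshape(24, 60).mean(axis=1)
    hours = np.arange(24)
    trend = np.polyval(np.polyfit(hours, hourly, 1), hours)
    detrended = hourly - trend
    return detrended - detrended[0]

dX = daily_variation(x_min)  # daily variation of the northward component
dY = daily_variation(y_min)  # daily variation of the eastward component

# Rotate the horizontal disturbance vector (X north, Y east) 90 degrees
# clockwise to obtain the equivalent overhead current vector:
# (north, east) -> (-Y, X), so a positive dX maps to an eastward current.
J_north, J_east = -dY, dX
magnitude = np.hypot(J_north, J_east)
print(np.round(magnitude, 1))
```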
Figure 1 shows the changes in the overhead current system with time on 21 June 1990, when the Kp value for the day was 10+. A rough indication of the magnetic field deviation corresponding to the length of the arrows is given. We note a significant eastward current flow early in the morning before the Sq whorl moves in from the east. The eastward current remains until 02:00 UT (10:00 LT) at ABY (Albany) in south-western Australia. On this day the focus of the Sq current system appears to be near 30° S latitude at 02:00 UT but then moves up to north of 25° S at 04:00 UT. It is as if, at the earlier times, the eastward current system is strong enough to push the focus southward (giving a positive deviation to observatories around 25° S), but later, at 04:00 UT, the Sq current system is able to dominate. An examination of the magnetic field variations at four "corners" of the continent is instructive, both in trying to find the true zero of the ionospheric current and in relating the observed variations to the overall current flow. In these diagrams (Fig. 2) zero is fixed as a running mean of midnight values over ten days, and so the zero may be slightly different from that in the vector plots.
Looking at Albany in the southwest, the morning peak in the northward magnetic field, X, is evidence of the eastward current flow. The main westward flow does not start until after 10:00 LT. This and other features are related to the late arrival of the Sq system on this day: the focus sits near 13:00-14:00 LT (see Fig. 1). At Toolangi (TOO) in the southeast the current turns from east to west even later, around 12:00 LT. The positive deviations at the northern stations, CKT and DER, last longer than in the south: in the early afternoon the Sq current whorl contributes to the positive deviation in X at the northern stations but to the negative deviation in the southern parts. In the west the negative deviation in Y is larger than its later positive deviation. This is probably because, by the time the Sq system reaches Albany, it is getting quite late in the afternoon (15:00-16:00 LT) and the ionospheric conductivity will have fallen. It is also worth noticing that some of the Y variations have an "M shape", indicating a large northward current near noon and smaller southward currents in the morning and afternoon. This differs from the "classical" Southern Hemisphere Y variation, which has a morning-to-noon negative excursion and an afternoon positive excursion. The additional morning positive excursion, indicating southward currents, may possibly be related to the morning eastward currents. The morning maximum is especially clear at CKT, where the minimum is almost missing. Again, the afternoon positive excursion is relatively weak because the ionospheric conductivity has fallen by the time it arrives. At lower latitudes such as CKT the extra early morning positive peak has sometimes been seen as part of an "invasion" of the Northern Hemisphere current system, as described by Mayaud (1976). However, the authors suspect that this low-latitude effect will eventually be recognised as being caused by field-aligned currents.
Figure 1 shows that the morning eastward current system does not appear to be part of some current whorl, at least not of the same form as the usual Sq whorl.
Overall, in June-July 1990 an eastward current was usually observed during the local time period from 06:00 to around 11:00. On 2 June 1990, it can be seen in Fig. 3 that the current was still eastward at 12:00 noon, 135° E local time (see EMU and MTD, which reverse to westward an hour later). Kp was 13−. This example points out the considerable strength of the early morning current system, as it holds up the arrival of the Sq whorl until noon.
By contrast, in December, March and April the early morning eastward current often does not extend beyond 08:00 LT. An example of this is shown in Fig. 4 for 3 December 1989, where the currents have started to turn westward at 08:00 LT. Later in this day the focus moves in at about 23° S latitude.
Figure 5, from 3 April 1990, is another good example showing the two current systems "meeting each other". In the east, where it is 09:00 LT, the Sq current whorl can be seen approaching, while in the west, where the local time is 07:00, the eastward current system is dominant. This picture is fairly typical of April, where the transition from eastward to westward currents occurs around 08:00 LT in central south Australia.
It is interesting to look at a series of ten days of the X component in December 1989, as shown in Fig. 6. Three of the observatories, CTA, NEW and TOO, are on the eastern side of Australia, while the other two, LRM and ABY, are in the west. In several cases an "M" shape can be seen in the daily variation of X, particularly at Newcastle (NEW) and Toolangi (TOO). The morning maximum is part of the eastward current system under examination in this paper, the minimum is part of the Sq current whorl, and the afternoon maximum appears to be part of some other current system. The current vectors in Fig. 7 show that this is so. On some days Albany (ABY) has a clear positive deviation while Toolangi (TOO) is negative in the middle of the day (e.g. 8 and 9 December). On these days the focus is south of Albany but north of Toolangi. This indicates that the latitude of the Sq focus has moved southward as it moved westward over the continent (see Stening, 1991). The next question is whether this phenomenon is unique to the Australian region.
Fig. 1. Equivalent current systems over Australia on 21 June 1990 at 00:00, 02:00, 03:00 and 04:00 UT (09:00 to 13:00 LT at 135° longitude). An arrow length equivalent to 5° of longitude westward corresponds to a magnetic field (rotated 90° clockwise) of 27, 25, 21 and 24 nT at 00:00, 02:00, 03:00 and 04:00 UT, respectively.
Figure 8 shows a sample of X (or H) variations from several observatories at different locations around the world for the same period in December 1989 as in Fig. 6. Four of these stations are in the Northern Hemisphere, where it is winter, and again we can see signs of early morning eastward currents, as evidenced by early morning positive deviations. Since a network similar to that provided by AWAGS is not available elsewhere, we cannot definitely confirm that a similar current system is in place at other longitudes, but the indications are that it might be so.
At Port Alfred in the Crozet Islands (CZT) in the southern Indian Ocean the H variation looks somewhat similar to that at Albany on 9 December. However, the morning maximum at CZT is at around 06:00-07:00 LT and the minimum is at 12:00-13:00 LT, whereas the Albany maximum on 9 December is at 12:00 LT. So it appears that the correct interpretation is that the morning maximum at CZT is part of the morning eastward current system, which is under discussion here, while the noon minimum is related to the Sq whorl, whose focus is north of CZT. The amplitude of the morning maximum at CZT is around 40 nT, similar to the amplitudes at the Australian observatories during this season. A similar variation is seen (but not shown here) at Port-aux-Français (Kerguelen) (70.2° E, 49.3° S), which is at geomagnetic latitude 57.3° S.
Discussion
It is hard to find traces of these morning eastward currents in earlier work. Takeda (1999) performed spherical harmonic analyses of magnetic data and reconstructed equivalent currents. He finds predominantly north-south currents near dawn in the Southern Hemisphere in March and December. In June his currents in northern Australia appear as an "invasion" of the Northern Hemisphere current system, but there is little evidence of the morning eastward currents we are discussing. Some discussion of what happens to ionospheric current systems as they cross Australia was given by Stening and Hopgood (1991). The early morning peaks in X are clearly visible in the data presented there, but the limited data accessed at that time could not provide the insights which the AWAGS array has now given.
We should now ask what the source of these early morning currents is. Are they of magnetospheric or ionospheric origin? Are they a residual "disturbance effect" seen even on very quiet days, or are they a component of the ionospheric dynamo driven by tidal winds in the ionosphere? Mayaud (1976) examined data from Alibag and San Juan on very quiet days, seeking the signature of the quiet-time magnetospheric currents predicted by Olson (1970a, b). He found that the predicted effect was practically indiscernible.
Another candidate might be the currents associated with Sqp, an "equivalent" electric current which flows in the polar cap region but which, on occasion, may extend to lower latitudes. Iijima and Kokubun (1973) investigated Sqp on a very quiet day and concluded that its effects did not extend to magnetic latitudes lower than 70° on that occasion. In any case the expected current flow direction in the early morning is westward at lower latitudes, and so it cannot explain the present observations. It should be noted that, when there is an "M"-shaped variation of X near the focus, at least in the Australian region, the central (nearest midday) deviation is the one corresponding to the Sq whorl. The morning and afternoon deviations do not appear to derive from a whorl-like current distribution, at least not one with a focus in the 10°-40° latitude range over Australia. One might have imagined that secondary current whorls were present, as are sometimes seen in representations of the semidiurnal lunar tide (e.g. Matsushita, 1969), but this does not seem to be so. We are left with the possibility that we are seeing the reversal of currents associated with the dynamo process, and so any modelling of the Sq system will need to reproduce this feature. However, the lack of any whorl-like structure to the early morning currents casts some doubt on this idea. Le Sager and Huang (2002) have suggested that field-aligned currents may make an important contribution near dawn, but their model does not clearly demonstrate the effect presented here. Yamashita and Iyemori (2002) have presented evidence for field-aligned currents, using data from the Ørsted satellite. However, they show that the current direction reverses from summer to winter. This may explain effects in Y, but the morning eastward current emphasised here does not change direction with season.
Conclusions
1. With an array of magnetic observatories it is possible to identify which parts of the daily variation curve in X or H correspond to the Sq current whorl.
2. An eastward flowing current system is often seen in the morning which has no whorl-like structure.
3. In Australia the reversal times from eastward to westward currents are generally later in June than in other seasons.
4. There are signs of similar morning currents at other longitudes and in the Northern Hemisphere.
5. The source of these morning eastward currents has not been definitely determined, but it does not appear to be part of the Sq^p system.
Fig. 6. A time series of the X component hourly values from 6 to 14 December 1989. CTA (Charters Towers), NEW (Newcastle) and TOO (Toolangi) are in the east while LRM (Learmonth) and ABY (Albany) are in the west.
Fig. 8. A time series of X component hourly values from 6 to 15 December 1989 from a range of observatories: NGK (Niemegk) in Eastern Germany, CLF (Chambon la Forêt) in France, TUC (Tucson) in Southern USA, MMB (Memambetsu) in Northern Japan and CZT (Port Alfred) in the Southern Indian Ocean.
Table 1. Coordinates of observatories used. | 3,609.2 | 2005-02-28T00:00:00.000 | [
"Physics"
] |
On the connections of the generalized entropies and Kolmogorov-Sinai entropies
We consider the concept of generalized measure-theoretic entropy, where instead of the Shannon entropy function we consider an arbitrary concave function defined on the unit interval and vanishing at the origin. Under mild assumptions on this function we show that this isomorphism invariant is linearly dependent on the Kolmogorov-Sinai entropy.
Introduction
Dynamical and measure-theoretic entropies (the latter also called Kolmogorov-Sinai entropy) are basic tools for investigating dynamical systems (see e.g. [5,9]). They have been extensively studied and successfully applied, among others, in statistical physics and quantum information, and have proved to be an exceptionally powerful tool for exploring nonlinear systems. One of the biggest advantages of the Kolmogorov-Sinai entropy lies in the fact that it makes it possible to distinguish the formally regular systems (those with measure-theoretic entropy equal to zero) from the chaotic ones (those with positive entropy, which implies positivity of the topological entropy [10]).
The Kolmogorov-Sinai entropy of a given transformation T acting on a probability space (X, Σ, µ) is defined as the supremum over all finite measurable partitions P of the dynamical entropy of T with respect to P, denoted by h(T, P). As a dynamical counterpart of the Shannon entropy, the entropy of the transformation T with respect to a given partition P is defined as the limit of the sequence (1/n) H(P_n), where H(Q) = Σ_{A∈Q} η(µ(A)), η is the Shannon function given by η(x) = −x log x for x > 0 with η(0) := 0, and P_n = P ∨ T^{-1}P ∨ ... ∨ T^{-(n-1)}P is the join of the partitions T^{-i}P for i = 0, ..., n − 1.
The existence of the limit in the definition of the dynamical entropy follows from the subadditivity of η. The most common interpretation of this quantity is the average (over time and the phase space) one-step gain of information about the initial state. Taking the supremum over all finite partitions we obtain an isomorphism invariant which measures the rate at which the system produces randomness (chaos). Since Shannon's seminal paper [18] many generalizations of the concept of the Shannon static entropy have been considered, see Arimoto [1], Rényi [16] and Csiszár's survey article [4]. The dynamical and measure-theoretic counterparts were considered by few authors. De Paly [14] proposed generalized dynamical entropies based on the concept of relative static entropies. Unfortunately it appeared that, despite some special cases [14,15], the explicit calculation of this invariant may not be possible. Grassberger and Procaccia proposed in [6] a dynamical counterpart of the well-known generalization of the Shannon entropy, the Rényi entropy, and its measure-theoretic counterpart was considered by Takens and Verbitski. They showed that for ergodic transformations with positive measure-theoretic entropy, the Rényi entropies of a measure-preserving transformation are either infinite or equal to the measure-theoretic entropy [20]. The answer for non-ergodic aperiodic transformations is different: for Rényi entropies of order α > 1 they are equal to the essential infimum of the measure-theoretic entropies of the measures forming the decomposition of a given measure into ergodic components, while for α < 1 they are still infinite [21]. In particular, this means that Rényi entropies of order α < 1 are metric invariants sensitive to ergodicity. A similar generalization was made by Mesón and Vericat [11,12] for the so-called Havrda-Charvát-Tsallis entropy [7] and their results were similar to the ones obtained by Takens and Verbitski in [20].
Our approach is based on Arimoto's generalization applied to the dynamical case. Instead of the Shannon function η we consider a concave function g : [0, 1] → R such that lim_{x→0+} g(x) = g(0) = 0 and define the dynamical g-entropy of the finite partition P as h(g, T, P) = lim sup_{n→∞} (1/n) Σ_{A∈P_n} g(µ(A)).
The behaviour of the quotient g(x)/η(x) as x converges to zero appears to be crucial for our considerations. Mainly, defining Ci(g) := lim inf_{x→0+} g(x)/η(x) and Cs(g) := lim sup_{x→0+} g(x)/η(x), we will prove that the dynamical g-entropy of a partition is bounded from below by Ci(g)·h(T, P) and from above by Cs(g)·h(T, P). In the case of Ci(g) = ∞ we will show that in every aperiodic system and for every γ ≥ 0, there exists a finite partition P such that h(g, T, P) ≥ γ.
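As a purely illustrative numerical sketch (added by the editor, not part of the original paper), these quantities can be estimated for a Bernoulli process, whose cylinder measures are products of the one-step probabilities. The concave test function g below and the probability vector p are arbitrary choices made only for this example; by construction g/η → 2 at zero, so Cs(g) = Ci(g) = 2.

```python
import math
from itertools import product

eta = lambda x: -x * math.log(x) if x > 0 else 0.0   # Shannon function eta(x) = -x log x
g = lambda x: 2 * eta(x) + x * (1 - x)               # concave, g(0) = 0, g/eta -> 2 as x -> 0+

def block_entropy(func, p, n):
    # Static entropy of the n-th join partition of a Bernoulli(p) shift:
    # the measure of a length-n cylinder is the product of the one-step probabilities.
    return sum(func(math.prod(p[s] for s in w)) for w in product(range(len(p)), repeat=n))

p = (0.3, 0.7)
for n in (4, 8, 12):
    h_eta = block_entropy(eta, p, n) / n
    h_g = block_entropy(g, p, n) / n
    print(n, round(h_eta, 4), round(h_g, 4), round(h_g / h_eta, 3))
# The ratio in the last column drifts towards Cs(g) = 2,
# illustrating the linear dependence on the Shannon dynamical entropy.
```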
Taking the supremum over all partitions we obtain a Kolmogorov-Sinai-like isomorphism invariant, which we will call the measure-theoretic g-entropy of a transformation with respect to an invariant measure. One might ask whether this invariant may give any new information about the system. We will prove (Theorem 3.2) that for g with Cs(g) < ∞, this new invariant is linearly dependent on the Kolmogorov-Sinai entropy. It means that in fact the Shannon entropy function is the most natural one: not only does it have all of the properties which an entropy function should have [5], but considering different entropy functions we will not obtain an essentially different invariant. This result has another interpretation. Ornstein and Weiss showed in [13] that every finitely observable invariant for the class of all ergodic processes has to be a continuous function of the entropy. It is easy to see that any continuous function of the entropy is finitely observable: one simply composes the entropy estimators with the continuous function itself. In other words, an isomorphism invariant is finitely observable if and only if it is a continuous function of the Kolmogorov-Sinai entropy. Therefore our result implies that the generalized measure-theoretic entropy is in fact finitely observable. It should be possible to give a more direct proof of the finite observability of the generalized measure-theoretic entropy, but the proof cannot be easier 1 than the proof that entropy itself is finitely observable, see [22].
The paper is organized as follows: in the next section we give a formal definition of the dynamical g-entropy and establish its basic properties. The subsequent section is devoted to the construction of a zero dynamical entropy process with a given positive g-entropy. Finally, in the last section, we define a measure-theoretic g-entropy of a transformation and show connections between this new invariant and the Kolmogorov-Sinai entropy.
Basic facts and definitions
Let (X, Σ, µ) be a Lebesgue space and let g : [0, 1] → R be a concave function with g(0) = lim_{x→0+} g(x) = 0. 2 By G_0 we will denote the set of all such functions. Every g ∈ G_0 is subadditive, i.e. g(x + y) ≤ g(x) + g(y) for every x, y ∈ [0, 1] with x + y ≤ 1, and quasihomogenic, i.e. ϕ_g : (0, 1] → R defined by ϕ_g(x) := g(x)/x is decreasing (see [17]). 3 For a given finite partition P we define the g-entropy of the partition P as H(g, P) = Σ_{B∈P} g(µ(B)). For g = η the latter is equal to the Shannon entropy of the partition P. For two finite partitions P and Q of the space X we define a new partition P ∨ Q (the join partition of P and Q) consisting of the subsets of the form B ∩ C where B ∈ P and C ∈ Q. The join partition of more than two partitions is defined similarly.
Footnotes: 1. Benjamin Weiss, personal communication. 2. We might assume only that g(0) = 0, but then the idea of the dynamical g-entropy would fail, since if P_{n+1} ≠ P_n for every n and lim_{x→0+} g(x) ≠ 0, then the dynamical g-entropy of the partition P would be infinite. Therefore, if g is not well-defined at zero we will assume that g(0) := lim_{x→0+} g(x). 3. If g is fixed we will omit the index, writing just ϕ.
Dynamical g-entropies.
For an automorphism T : X → X and a partition P = {E_1, ..., E_k} we put P_n = P ∨ T^{-1}P ∨ ... ∨ T^{-(n-1)}P. Now for a given g ∈ G_0 and a finite partition P we can define the dynamical g-entropy of the transformation T with respect to P as (2) h_µ(g, T, P) = lim sup_{n→∞} (1/n) H(g, P_n).
Alternatively we will call it the g-entropy of the process (X, Σ, µ, T, P). If the dynamical system (X, Σ, T, µ) is fixed then we omit T, writing just h(g, P). As in the case of the Shannon dynamical entropy we are interested in the existence of the limit of the sequence (1/n) H(g, P_n), n = 1, 2, .... If g = η, we obtain the Shannon dynamical entropy h(T, P). However, in the general case we cannot replace the upper limit in (2) by the limit, since the latter might not exist. The existence of the limit in the case of the Shannon function follows from the subadditivity of the static Shannon entropy. Every subderivative function, i.e. every function for which the inequality g(xy) ≤ xg(y) + yg(x) holds for any x, y ∈ [0, 1], has this property, but this is not true in general (an appropriate example will be given in Section 2.2). Therefore we propose more general classes of functions for which the limit exists. It exists if g belongs to one of the two following classes. It is easy to show that if g is subderivative then the limit lim_{x→0+} g(x)/η(x) is finite. Moreover we will see that the values of dynamical g-entropies depend on the behaviour of g in the neighbourhood of zero. We will prove that if g ∈ G_0^0 ∪ G_0^Sh, then there is a linear dependence between the dynamical g-entropy and the Shannon dynamical entropy of a given partition. Before we give the general result (Theorem 2.1) we will state a few facts which we will use in the proof of this theorem. We give the following lemmas, omitting their elementary proofs.
The following lemma states that the value of the dynamical g-entropy is given by the behaviour of g in the neighbourhood of zero.
Proof. Let P ∈ B and let g_1, g_2 ∈ G_0 and c > 0 fulfill the assumptions. Since g ∈ G_0 is bounded we have the required estimate; dividing by n and letting n tend to infinity we obtain the claim. We may now state the main theorem of this section.
and h(g_2, P) < ∞, then If additionally lim sup Remark 2.1. Whenever g_2 : [0, 1] → R is a nonnegative concave function satisfying g_2(0) = 0 and g_2′(0) = ∞, we can have any pair 0 < a ≤ b ≤ ∞ as the limit inferior and limit superior of g_1/g_2 at 0, choosing a suitable function g_1. The idea is as follows: construct g_1 piecewise linear. To do so, define inductively a strictly decreasing sequence x_k → 0 and a decreasing sequence of values y_k = g_1(x_k) → 0, thus defining intervals J_k := [x_{k+1}, x_k] on which g_1 is affine. The only constraint needed to get a concave function is that the slope of g_1 on each interval J_k has to be smaller than y_k/x_k and increasing with respect to k; this is not an obstruction to approaching any limit inferior and limit superior for g_1(x)/g_2(x), provided that x_{k+1} > 0 is chosen small enough.
Proof of Theorem 2.1. Let P ∈ B. Suppose that g ∈ G_0 and g′(0) < ∞. Then which completes the proof of point 1. To prove point 2 let g_1, g_2 ∈ G_0 be such that We will assume that lim sup The estimate of the lower bound for h(g_1, P) remains correct if we omit this assumption. Since g is subadditive, the sequence (H(g, P_n))_{n=1}^∞ is nondecreasing and the limit lim_{n→∞} H(g_2, P_n) exists. If it is finite, then h(g_2, P) = 0 and by (3) and Lemma 2.1 we have Since lim sup Therefore we can assume that lim Using (3) for every n > 0 we get and Σ_{A∈P_n: µ(A)<δ} and from (4) we obtain Letting n tend to infinity we obtain: Therefore we obtain the assertion. In the case of an infinite limit superior of the quotient g_1(x)/g_2(x) we can repeat the above reasoning, just omitting the upper bound for the considered expressions.
g 2 (x) and using similar arguments we obtain point 3.
Using similar arguments we may obtain the answer in the case of the infinite limit lim_{x→0+} g_1(x)/g_2(x) and positive dynamical g_2-entropy: Theorem 2.2. Let g_1, g_2 ∈ G_0 be such that lim_{x→0+} g_1(x)/g_2(x) = ∞ and let a finite partition P have positive g_2-entropy. Then h(g_1, P) is infinite.
2.2. Case of g ∈ G_0^∞. We will prove that for every g ∈ G_0^∞, any aperiodic automorphism T and every γ ∈ R there exists a partition P ∈ B such that h(g, P) ≥ γ. Since we omit the assumption of ergodicity we will use different techniques, mainly based on the well-known Rokhlin Lemma which guarantees the existence of so-called Rokhlin towers of a given height covering a sufficiently large part of X. Using such towers we will find lower estimates for the g-entropy of a process, similar to those obtained by Frank Blume in [2], [3], where he proposed, for a given sequence (a_n)_{n=1}^∞ converging to infinity more slowly than n, a construction of a partition P into two sets for which lim_{n→∞} H(P_n)/a_n = ∞.
If M_0, . . . , M_{n−1} ⊂ X are pairwise disjoint sets of equal measure, then τ = (M_0, M_1, . . . , M_{n−1}) is called a tower. If additionally M_k = T^{−(n−k−1)} M_{n−1} for k = 1, . . . , n − 1, then τ is called a Rokhlin tower. 4 By the same bold letter τ we will denote the set ∪_{k=0}^{n−1} M_k. Obviously µ(τ) = nµ(M_{n−1}). The integer n is called the height of the tower τ. Moreover, for i < j we define a subtower. In aperiodic systems there exist Rokhlin towers of a given height covering a sufficiently large part of X: Lemma 2.4 (Rokhlin Lemma). If T is an aperiodic and surjective transformation of a Lebesgue space (X, Σ, µ), then for every ε > 0 and every integer n ≥ 2 there exists a Rokhlin tower τ of height n with µ(τ) > 1 − ε.
Our goal is to find a lower bound for the dynamical g-entropy of a given partition. For this purpose we will use Rokhlin towers and calculate the dynamical g-entropy with respect to a given Rokhlin tower. This leads us to the following definition: let P be a finite partition of X and F ∈ Σ; then we define the (static) g-entropy of P restricted to F as H_F(g, P) := Σ_{B∈P} g(µ(B ∩ F)).
The following lemma gives an estimate of H(g, P) from below in terms of the g-entropy restricted to a subset of X. Lemma 2.5. Let g ∈ G_0. Let P be a finite partition such that there exists a set E ∈ P with 0 < µ(E) < 1. If F ∈ Σ, then Proof. By the mean value theorem we have, for any set of measure smaller than or equal to 1/2, where x_0^A ∈ (µ(A ∩ F), µ(A)).
The following lemma will play an important role in the proof of the main theorem of this section, where α is some positive number. Then there exist δ > 0 and s ∈ (0, α) such that Proof. The nonnegativity of g for x ∈ [0, α] and its concavity imply that there exists s ∈ (0, α) such that g is nondecreasing on [0, s]. Fix n ∈ N and E ∈ Σ. There exists δ ∈ (0, s) such that Let F ∈ Σ be such that µ(E△F) ≤ δ. Then for every A ∈ P_n^E and B ∈ P_n^F we have (6) |µ It is easy to see that for x ∈ [0, s] the monotonicity and subadditivity of g imply that (7) |g(y) − g(x)| ≤ g(|y − x|).
Define D_s = {i ∈ {1, . . . , m} | µ(A_i) < s and µ(B_i) < s}. From (5), (6), (7) and the monotonicity of g on [0, s] we obtain To find the lower bound for the g-entropy of a partition we will construct so-called independent sets. We construct such a set in the following way: let τ be a tower of height m. We divide the highest level of this tower (M_{m−1}) into two sets of equal measure, say I^{(m−1)} and M_{m−1}\I^{(m−1)}. Next we consider T^{−1}I^{(m−1)} and T^{−1}(M_{m−1}\I^{(m−1)}). We divide each of them into two sets of equal measure and obtain sets I We can make this construction since no aperiodic system has atoms of positive measure, and in every non-atomic Lebesgue space, for every measurable set A and every α ∈ [0, µ(A)] there exists B ⊂ A such that µ(B) = α. Now we give an estimate from below for the g-entropy restricted to a given Rokhlin tower. First, by P_I we will denote the partition into two sets {I, X\I}, for a measurable set I. Then the following lemma holds.
Theorem 2.3. Let g ∈ G ∞ 0 and T be an aperiodic, surjective automorphism of a Lebesgue space (X, Σ, µ) and let γ ∈ R. Then there exists a partition P ∈ B such that h(g, P) ≥ γ.
Proof. We will prove that for any γ > 0 there exists a partition P_E = {E, X\E} such that h(g, P_E) ≥ γ. We define recursively a sequence of sets E_n ∈ Σ. Let n > 0 and assume that we have already defined E_{n−1}, N_{n−1} and δ_{n−1}. Using Lemma 2.6 we can choose δ_n > 0 such that for any F ∈ Σ for which µ(E_{n−1}△F) < 2δ_n. Since we can choose N_n ∈ N such that (10) ϕ(δ_n 2^{−N_n−1}) / ϕ_η(δ_n 2^{−N_n−1}) > 2γ / (δ_n log 2).
By Lemma 2.4 there exists M_n ∈ Σ such that τ_n = (M_n, T M_n, . . . , T^{2N_n−1} M_n) is a Rokhlin tower of measure µ(τ_n) = δ_n. Let I_n ⊂ τ_n be an independent set in τ_n and E_n := (E_{n−1}\τ_n) ∪ I_n. Then for all positive integers n. By (8) we have δ_n < 2^{−n} and we conclude that (1_{E_n})_{n=0}^∞ is a Cauchy sequence in L^1(X). Therefore there exists E ∈ Σ such that 1_{E_n} converges to 1_E. For this set we have Since E_n ∩ τ_n = I_n, applying (9) and Lemmas 2.5, 2.7 we obtain that for N_n such that δ_n · 2^{−N_n−1} < s: From (10) we obtain that For any s ≤ t and any block [ω_0, . . . , ω_{t−s}] with ω_i ∈ A we define a cylinder C_s^t(ω_0, . . . , ω_{t−s}) = {x ∈ X : x_i = ω_{i−s} for i = s, . . . , t}. We consider the Borel σ-algebra with respect to the metric given by d(x, y) = 2^{−N}, where N = min{|i| : x_i ≠ y_i}. One can show that the Borel σ-algebra is the minimal σ-algebra containing all cylindrical sets. Let p = (p_1, . . . , p_k) be a probability vector, i.e. p_i ≥ 0 for every i and Σ p_i = 1. We define a measure ρ = ρ(p) on A by setting ρ({i}) = p_i, and let µ_p be the corresponding product measure on X = A^Z. Thus, the static g-entropy of the partition P_A = {[1], [2], . . . , [k]} is equal to H_{µ_p}(g, P_A^n) = Σ_{ω∈A^n} g(µ_p(C_0^{n−1}(ω_0, . . . , ω_{n−1}))) = Σ_{ω∈A^n} g(p_{ω_0} · · · p_{ω_{n−1}}), where ω = (ω_0, . . . , ω_{n−1}). By the concavity of the function g we have H_{µ_p}(g, P_A^n) ≤ k^n g(k^{−n}), where equality holds only when p = p* = (1/k, . . . , 1/k). Before calculating the dynamical g-entropy of the partition P_A with respect to the measure µ_{p*}, we give the following lemma, whose proof will be given later: Lemma 2.8. For any κ > 1, lim sup_{n→∞} g(κ^{−n})/η(κ^{−n}) = Cs(g) and lim inf_{n→∞} g(κ^{−n})/η(κ^{−n}) = Ci(g).
Therefore, applying Lemma 2.8 for the partition P_A and κ = k we obtain h_{µ_{p*}}(g, P_A) = Cs(g) · log k. Remark 2.2. If we considered the lower limit instead of the upper limit we would obtain lim inf_{n→∞} (1/n) H_{µ_{p*}}(g, P_A^n) = Ci(g) · log k. Therefore we cannot replace the upper limit by the limit in the definition of the dynamical g-entropy.
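For the uniform product measure the step just invoked can be spelled out in one line; the following display is an editorial sketch of that computation, using only that all k^n cylinders have measure k^{-n}, that η(k^{-n}) = n k^{-n} log k, and Lemma 2.8 with κ = k.

```latex
\frac{1}{n} H_{\mu_{p^*}}\!\left(g,\mathcal{P}_A^n\right)
  = \frac{k^{n}}{n}\, g\!\left(k^{-n}\right)
  = \frac{g\!\left(k^{-n}\right)}{\eta\!\left(k^{-n}\right)}\,\log k ,
\qquad\text{so}\qquad
h_{\mu_{p^*}}\!\left(g,\mathcal{P}_A\right)
  = \limsup_{n\to\infty}\frac{g\!\left(k^{-n}\right)}{\eta\!\left(k^{-n}\right)}\,\log k
  = Cs(g)\,\log k .
```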
Proof of Lemma 2.8. We will show the equality for the upper limit; the proof of the equality for the lower limit is similar. Let (x_n)_{n=1}^∞ and (m_n)_{n=1}^∞ be such that lim sup_{n→∞} g(x_n)/η(x_n) = c and x_n ∈ (κ^{−m_n}, κ^{−m_n+1}) for every n ∈ N. Then −log x_n ≥ −log κ^{−m_n+1}. Every function g ∈ G_0 is quasihomogenic, so for every positive integer n we have g(x_n)
Kolmogorov-Sinai entropy like invariant
A basic tool in ergodic theory is the Kolmogorov-Sinai entropy, defined as the supremum of Shannon dynamical entropies over all finite partitions: h_µ(T) = sup_P h_µ(T, P). It is invariant under metric isomorphism. Following Kolmogorov's approach we take the supremum of the dynamical g-entropy of a partition over all partitions. For a given system (X, Σ, µ, T) we define (12) h_µ(g, T) = sup_P h_µ(g, T, P) and call it the measure-theoretic g-entropy of the transformation T with respect to the measure µ.
It is easy to see that it is an isomorphism invariant. Ornstein and Weiss [13] showed the striking result that the measure-theoretic entropy is the only finitely observable invariant for the class of all ergodic processes. More precisely, every finitely observable invariant for the class of all ergodic processes is a continuous function of the entropy. Of course, in the case of g ∈ G_0^0 ∪ G_0^Sh, by Corollary 2.2 we have We will show that for a wider class of functions, namely for functions with Cs(g) < ∞, we have h_µ(g, T) = Cs(g) · h_µ(T) for any ergodic transformation T. This shows that the measure-theoretic g-entropy is in fact finitely observable: one might simply compose the entropy estimators [22] with the linear function itself. Our proof will be similar to the proof of [20, Thm 1.1], where Takens and Verbitski showed that for ergodic transformations the supremum over all finite partitions of the dynamical Rényi entropies of order α > 1 is equal to the measure-theoretic entropy of T with respect to the measure µ. Let us introduce the necessary definitions. Let T_i be automorphisms of Lebesgue spaces (X_i, Σ_i, µ_i) for i = 1, 2, respectively. Then we say that T_2 is a factor of the transformation T_1 if there exists a homomorphism φ : X_1 → X_2 such that φT_1 = T_2φ µ_1-a.e. on X_1.
Suppose that T_2 is a factor of T_1 under the homomorphism φ. Then for an arbitrary finite partition P of X_2 we have H_{µ_2}(g, P_n) = H_{µ_1}(g, (φ^{−1}P)_n) for every n. Hence h(g, T_2, P) = h(g, T_1, φ^{−1}P). Therefore h_{µ_2}(g, T_2) ≤ h_{µ_1}(g, T_1). This implies the following proposition: Proposition 3.1. If T_2 is a factor of T_1, then for every function g ∈ G_0 we have h_µ(g, T_2) ≤ h_µ(g, T_1).
3.1. Measure-theoretic g-entropies for Bernoulli automorphisms. An automorphism T on (X, Σ, µ) is called a Bernoulli automorphism if it is isomorphic to some Bernoulli shift. A crucial role in the proof of the main theorem of this section (Theorem 3.2) will be played by a well-known theorem due to Sinai: Theorem 3.1 ([19]). Let T be an arbitrary ergodic automorphism of some Lebesgue space (X, Σ, µ). Then each Bernoulli automorphism T_1 with h_µ(T_1) ≤ h_µ(T) is a factor of the automorphism T.
The following proposition will play a crucial role in our considerations. Proposition 3.2. Let T be an ergodic automorphism of a Lebesgue space (X, Σ, µ) with h_µ(T) ≥ log M for some integer M ≥ 2, and let g ∈ G_0. Then h_µ(g, T) ≥ Cs(g) · log M. Proof. Consider the Bernoulli shift σ on M symbols equipped with the uniform product measure; it is easy to see that h_µ(σ) = log M. From Theorem 3.1 we conclude that σ is a factor of T. Therefore, applying formula (11) we obtain the required lower bound for h_µ(g, T), and applying Lemma 2.8 completes the proof.
3.2. Main theorem. Our goal in this section is the following result: Theorem 3.2. Let T be an ergodic automorphism of a Lebesgue space (X, Σ, µ), and let g ∈ G_0 be such that Cs(g) ∈ (0, ∞). Then h_µ(g, T) = Cs(g) · h_µ(T). If g ∈ G_0^0, then h_µ(g, T) = 0. If g ∈ G_0 is such that Cs(g) = ∞ and T has positive measure-theoretic entropy, then h_µ(g, T) = ∞.
To prove Theorem 3.2 we first need a few preliminary lemmas.
Therefore lim inf_{k→∞} c_k ≤ m · lim sup_{n→∞} a_n, and this is equivalent to the statement h(g, T^m, P) ≤ m·h(g, T, P). Taking the supremum over all finite partitions we obtain the assertion.
The next lemma is a weaker version of Theorem 3.2.
Lemma 3.2. If the automorphism T^m of a Lebesgue space (X, Σ, µ) is ergodic for every m ∈ N, then for every function g ∈ G_0 such that Cs(g) < ∞ we have h_µ(g, T) = Cs(g) · h_µ(T).
Proof. The case of g ∈ G_0^0 follows from Corollary 2.2. Suppose that there exists g ∈ G_0\G_0^0 which fulfills the assumptions of the lemma and for which h_µ(g, T) < Cs(g) · h_µ(T). Then applying Lemma 3.1 to the transformation T^m and using the equality h_µ(T^m) = m·h_µ(T) (see [9, Thm 4.3.16]) we obtain the corresponding inequality for T^m. Therefore for sufficiently large m there exists an integer M for which (15) h_µ(g, T^m) ≤ m·h_µ(g, T) < Cs(g) log M ≤ m·Cs(g)·h_µ(T) = Cs(g)·h_µ(T^m).
Proposition 3.2 applied to the transformation T^m guarantees that for every g ∈ G_0 with positive (finite) Cs(g) we have (16) h_µ(g, T^m) ≥ Cs(g) log M.
Comparing (15) and (16) we obtain a contradiction, which implies the assertion. Proof of Theorem 3.2. If h_µ(T) = 0 the theorem is true, since for any partition P we have 0 ≤ h(g, P) ≤ Cs(g)·h(P) = 0.
Since T′ is a factor of T, Proposition 3.1 implies that Cs(g)·h_µ(T) = Cs(g)·h_µ(T′) = h_µ(g, T′) ≤ h_µ(g, T) ≤ Cs(g)·h_µ(T), which completes the proof of the case of finite h_µ(T). If h_µ(T) = ∞, then Proposition 3.2 implies that h_µ(g, T) ≥ Cs(g) log M for every M > 0 and the theorem is proved.
3.3. Generator theorem counterpart. In the case of g ∈ G_0^∞ there is no counterpart of the Kolmogorov-Sinai generator theorem, which says that the measure-theoretic entropy of the transformation T is realised on every generator of the σ-algebra Σ. Let us consider Sturmian shifts: shifts which model translations of the circle T = [0, 1). Let β ∈ [0, 1) and consider the translation φ_β : [0, 1) → [0, 1) defined by φ_β(x) = x + β (mod 1). Let P denote the partition of [0, 1) given by P = {[0, β), [β, 1)}. Then we associate a binary sequence to each t ∈ [0, 1) according to its itinerary relative to P; that is, we associate to t ∈ [0, 1) the bi-infinite sequence x defined by x_i = 0 if φ_β^i(t) ∈ [0, β) and x_i = 1 if φ_β^i(t) ∈ [β, 1). The set of such sequences is not necessarily closed, but it is shift-invariant and so its closure is a shift space called a Sturmian shift. If β is irrational, then the Sturmian shift is minimal, i.e. it has no proper subshift. Moreover, for a minimal Sturmian shift the number of n-blocks which occur in the shift space is exactly n + 1. Therefore, for the zero-coordinate partition P_A, which is a finite generator of the σ-algebra Σ, and for any function g ∈ G_0 we have H(g, P_A^n) = Σ_{A∈P_A^n} g(µ_S(A)) ≤ ϕ(1/(n+1)) = (n+1) g(1/(n+1)), where µ_S is the unique invariant measure of the Sturmian shift. Thus, h(g, P_A) ≤ lim sup_{n→∞} ((n+1)/n) g(1/(n+1)) = 0.
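The block-counting fact used above is easy to check numerically; the following short script is an illustrative editorial sketch (the golden-ratio value of β, the starting point and the orbit length are arbitrary choices for this experiment), coding a finite orbit of the translation and counting distinct n-blocks.

```python
import math

def sturmian_block_count(beta, n_max=8, orbit_len=200_000, t0=0.0):
    # Code the orbit of t0 under x -> x + beta (mod 1) by the partition {[0, beta), [beta, 1)}.
    x, word = t0, []
    for _ in range(orbit_len):
        word.append('0' if x < beta else '1')
        x = (x + beta) % 1.0
    w = ''.join(word)
    # count distinct factors (blocks) of each length n
    return {n: len({w[i:i + n] for i in range(len(w) - n + 1)}) for n in range(1, n_max + 1)}

beta = (math.sqrt(5) - 1) / 2   # irrational rotation number
print(sturmian_block_count(beta))   # expected: {1: 2, 2: 3, ..., n: n + 1}
```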
On the other hand, since the Sturmian shift is strictly ergodic (and thus aperiodic), Theorem 2.3 implies that h_µ(g, T) = ∞ for any function g ∈ G_0^∞; therefore we have a finite generator for which the supremum is not attained. | 7,011.2 | 2013-02-26T00:00:00.000 | [
"Computer Science",
"Mathematics"
] |
Growth and crystallographic feature-dependent characterization of spinel zinc ferrite thin films by RF sputtering
ZnFe2O4 (ZFO) thin films exhibiting varying crystallographic features ((222)-epitaxial, (400)-epitaxial, and randomly oriented films) were grown on various substrates by radio-frequency magnetron sputtering. The type of substrate used profoundly affected the surface topography of the resulting ZFO films. The surface of the ZFO (222) epilayer was dense and exhibited small rectangular surface grains; however, the ZFO (400) epilayer exhibited small grooves. The surface of the randomly oriented ZFO thin film exhibited distinct three-dimensional island-like grains that demonstrated considerable surface roughness. Magnetization-temperature curves revealed that the ZFO thin films exhibited a spin-glass transition temperature of approximately 40 K. The crystallographic orientation of the ZFO thin films strongly affected magnetic anisotropy. The ZFO (222) epitaxy exhibited the strongest magnetic anisotropy, whereas the randomly oriented ZFO thin film exhibited no clear magnetic anisotropy.
Background
Recently, spinel-structured ferrite oxides have been intensively investigated because of their versatile physical and chemical properties as well as technological applications in magnetic sensors, biosensors, and photocatalysts [1,2]. ZnFe2O4 (ZFO) is one of the major ferrite oxides with a spinel structure, and it has remarkable magnetic and electromagnetic properties that depend on its state of chemical order and cation site occupancy in the lattice [3]. Moreover, it is also a semiconductor, possesses a light response and photochemical characteristics, and can be used as a material for supercapacitors [4,5].
ZFO in various forms, such as powders, films, and various nanostructures, prepared using different methodologies, has been reported [6][7][8]. Many ZFO nanostructures can be used as versatile building blocks for fabricating functional nanodevices; however, integrating the reported methodologies for preparing nanostructured ZFO into Si-based semiconductor device processes remains a challenge. ZFO in thin-film form is promising and is compatible with the fabrication of devices on Si semiconductors. Yamamoto et al. prepared ZFO thin films on a single-crystal sapphire substrate by using pulsed laser deposition and examined the effect of the deposition rate on their magnetic properties [9]. ZFO thin films with a micrometer-scale thickness were grown on glass substrates by radio-frequency (RF) sputtering at room temperature, and the magnetic properties of the films were investigated [10]. Ogale et al. used a pulsed laser evaporation method to synthesize ZnO and ZnxFe3−xO4 mixed-phase thin films on sapphire substrates using ZnFe2O4 pellets; however, this is not an efficient method for obtaining single-phase spinel ZFO thin films [11]. Polycrystalline ZFO films were also prepared by spin-spray deposition; however, controlling the film thickness to be less than several hundred nanometers is challenging [12]. Although several groups have proposed the fabrication of ZFO films using versatile methodologies, the sputtering technique is promising for preparing oxide thin films with excellent crystalline quality and controllable film thickness for device applications because it enables large-area deposition and easy process control [13,14]. It is well known that crystallographic features affect the properties of versatile oxide films [13,15]. However, knowledge of the crystallographic feature-dependent properties of sputter-deposited spinel ZFO thin films is still inadequate. This might obstruct applications of such films in devices. In this study, ZFO thin films were grown on various single-crystal substrates by RF sputtering to fabricate ZFO thin films with varying crystallographic features. The correlation between the crystallographic features and the characterization of the ZFO thin films was investigated.
Methods
ZnFe2O4 (ZFO) thin films were grown on yttria-stabilized zirconia (YSZ) (111), SrTiO3 (STO) (100), and Si (100) substrates using RF magnetron sputtering. The yttria content in the YSZ substrates was 15%. The sputtering ceramic target adopted in the experiment was prepared by mixing the precursor oxide powders of ZnO and Fe2O3 to obtain a proportion of Fe/Zn = 2, pressing the powders into a pellet, and sintering the pellet at a high temperature to achieve a high density. The thickness of the ZFO thin films was fixed at approximately 125 nm, and the growth temperature was maintained at 650°C. The gas pressure of deposition was fixed at 30 mTorr, using an Ar/O2 ratio of 2:1 for the films. The atomic percentages of the as-deposited films were calculated based on the X-ray photoelectron spectroscopy (XPS) spectra of the Zn2p, Fe2p, and O1s regions. The chemical binding states of the constituent elements of the ZFO thin films were also investigated.
The crystal structures of the samples were investigated using X-ray diffraction (XRD), applying Cu Kα radiation. The surface morphology of the ZFO films was determined using scanning electron microscopy (SEM) and atomic force microscopy (AFM) over an area of 1 μm². The detailed microstructures of the as-synthesized samples were characterized using high-resolution transmission electron microscopy (HRTEM). The composition analysis was performed using an energy-dispersive X-ray spectrometer (EDS) attached to the TEM. Thin slices for cross-sectional TEM analysis were prepared using a dual-beam focused-ion-beam (FIB) instrument. The areas selected for cutting with an ion beam were protected by an amorphous carbon overlayer. The beam currents were adjusted to mill initial trenches, thin the central membrane, and polish the membrane to electron transparency. Finally, FIB milling was used to cut a free membrane from the trenches for TEM analysis. The room-temperature photoluminescence (PL) spectra were captured using the 325-nm line of a He-Cd laser. A superconducting quantum-interference device magnetometer was used to measure the magnetic properties of the samples.
Results and discussion
Figure 1 displays the X-ray diffraction (XRD) patterns of the ZFO thin films grown on various substrates. The XRD patterns show several sharp and intense Bragg reflections originating from the ZFO structure (according to JCPDS No. 89-1012), confirming that the ZFO thin films exhibited excellent crystalline quality. The absence of ZnO and FexOy phases in the XRD patterns indicated that a single-phase ZFO compound was formed. The ZFO films grown on the YSZ and STO substrates were highly (222)- and (400)-oriented, respectively. By contrast, the film grown on the Si substrate was randomly oriented. Most of the grains in the ZFO thin film grown on the Si substrate were (311)-oriented and some were (220)-oriented. The lattice constants of the ZFO thin films were derived from the observed Bragg reflections and were independent of the substrate types used in this study. The lattice constants of the ZFO thin films were approximately 0.843 nm, similar to that of the bulk counterpart (approximately 0.844 nm) [16], indicating that the highly oriented ZFO thin films were not affected by lattice distortion from the substrates (caused by a lattice mismatch between film and substrate). This might be attributed to the film thickness (approximately 125 nm), which markedly exceeded the critical value for misfit strain relaxation [17,18].
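As an illustrative cross-check of how a cubic lattice constant follows from a Bragg reflection, the short sketch below applies Bragg's law for Cu Kα radiation; the 2θ value used for the spinel (311) reflection is an assumed, representative number and is not taken from the paper.

```python
import math

lam = 1.5406   # Cu K-alpha wavelength in angstrom

def cubic_lattice_constant(two_theta_deg, hkl):
    # Bragg's law: lambda = 2 d sin(theta); for a cubic cell d = a / sqrt(h^2 + k^2 + l^2)
    theta = math.radians(two_theta_deg / 2)
    d = lam / (2 * math.sin(theta))
    return d * math.sqrt(sum(i * i for i in hkl))

# Assumed, illustrative 2-theta position of the spinel (311) reflection:
a_nm = cubic_lattice_constant(35.3, (3, 1, 1)) / 10
print(round(a_nm, 3), "nm")   # ~0.843 nm, consistent with the value quoted in the text
```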
The Fe/Zn atomic ratio and the binding states of the Zn and Fe constituent elements of the as-deposited ZFO thin film were evaluated based on the narrow-scan XPS spectra of Zn and Fe. The Fe/Zn atomic ratio was approximately 2.04, and this ratio is similar to the Fe/Zn stoichiometric composition of ZFO. Figure 2a shows a Zn2p narrow-scan XPS spectrum. The binding energies of Zn2p3/2 and Zn2p1/2 were 1,020.7 and 1,043.7 eV, respectively. These binding energies are close to the reported values for the Zn2+ binding state [19]. The core-level spectrum of Fe had a 2p3/2 binding energy of approximately 711.1 eV (Figure 2b). Moreover, a clear broad shake-up satellite at a binding energy of approximately 719.1 eV was observed. The energy difference between the 2p3/2 and 2p1/2 levels was approximately 13 eV in this study. These features were mainly associated with the Fe3+ binding state in the ZFO [20]. A shoulder at approximately 709.5 eV was observed in the Fe XPS spectrum, which might be associated with iron atoms in the ZFO lattices that were bonded in the Fe2+ state [21]. A symmetric O1s spectrum was observed for the as-deposited ZFO thin film (Figure 2c). The Gaussian-resolved results showed that the spectrum consisted of two peak components. The first was centered at approximately 529.7 eV and was attributed to the oxygen in the ZFO crystal. The second was centered at approximately 531.1 eV, representing the oxygen ions in the oxygen-deficient regions. The formation of oxygen vacancies in the sputtered ZFO thin films was attributed to the oxygen-deficient environment during thin-film preparation [22]. The nonstoichiometric oxygen content in the ZFO thin film supported the observation from the Fe core-level spectrum that Fe2+ and Fe3+ coexisted in the ZFO. Figure 3 shows the SEM images of the ZFO thin films grown on the various substrates. The morphologies of the ZFO thin films differed depending on the substrate on which they were grown. The surface of the ZFO grown on the YSZ substrate was dense and comprised tiny grains (Figure 3a). Most of the grains had a rectangular morphology with a size of approximately 100 to 130 nm. The surface of the ZFO film grown on the STO substrate consisted of numerous tiny grooves (Figure 3b). These grooves were approximately 20 to 30 nm in size. Clear three-dimensional (3D) bar-like grains homogeneously covered the surface of the film grown on the Si substrate (Figure 3c). The size range of these bar-like grains was 150 to 200 nm; these grains were large in comparison with those of the other samples. The detailed surface microstructures of the ZFO thin films were further analyzed using an atomic force microscope (AFM). A considerable portion of the surface of the ZFO thin film grown on the YSZ substrate was observed to be flat, with a root-mean-square (RMS) surface roughness of 0.49 nm (Figure 3d). The many dark spots distributed over the AFM surface image indicated that numerous tiny sunken regions were present on the ZFO surface (Figure 3e). This surface feature contributed to an RMS surface roughness of 1.19 nm for the film on STO. Figure 3f shows spiral-shaped surface grains covering the surface of the ZFO thin film grown on the Si substrate. The distinct 3D granular structure of this ZFO surface caused the surface to be relatively rough; the RMS surface roughness was 15.21 nm.
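For reference, the RMS roughness values quoted above follow the standard definition (root-mean-square deviation of the AFM height map from its mean plane); the snippet below is an editorial sketch using synthetic placeholder data, not the actual AFM scans.

```python
import numpy as np

def rms_roughness(height_map):
    # Root-mean-square roughness: RMS deviation of the height map from its mean plane.
    z = np.asarray(height_map, dtype=float)
    return np.sqrt(np.mean((z - z.mean()) ** 2))

# Toy 256x256 scan with ~0.5 nm random corrugation (illustrative only).
rng = np.random.default_rng(0)
print(round(rms_roughness(0.49 * rng.standard_normal((256, 256))), 2), "nm")
```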
The low-magnification cross-sectional transmission electron microscopy (TEM) image (Figure 4a) of the ZFO thin film grown on the YSZ substrate revealed a dense and flat film with no macroscopic imperfection; the total thickness of the ZFO layer was approximately 125 nm. The EDS analysis in Figure 4a confirmed the presence of Zn, Fe, and O in the film, and the atomic ratio of Fe/Zn (2.02) was close to the stoichiometric ratio of ZFO. The clear and ordered spots in the electron diffraction pattern (DP) taken from the film-substrate region (Figure 4b) showed that the growth of the ZFO film on the YSZ substrate was <111>ZFO//<111>YSZ and <110>ZFO//<110>YSZ. Figure 4c presents the cross-sectional high-resolution (HR) TEM image of the ZFO film grown on the YSZ substrate; the corresponding fast Fourier transform (FFT) patterns captured from the ZFO film, the film-substrate interface, and the YSZ are also shown in the insets. The interface between the ZFO and the YSZ contained a thin transition layer. Above this layer, an ordered atomic arrangement was observed, revealing epitaxial growth of the ZFO on the YSZ substrate. Figure 4d shows the low-magnification cross-sectional TEM image of the ZFO film grown on the STO substrate. The film was dense; however, several tiny grooves were observed on the film surface, and this resulted in a more rugged surface compared with that of the film grown on the YSZ substrate. The DP taken from the film-substrate region is shown in the inset of Figure 4d, which revealed that the growth of the ZFO film on the STO substrate was <100>ZFO//<100>STO and <110>ZFO//<110>STO. The HR image (Figure 4e) showed that the ZFO had clear and ordered lattice fringes, indicating that the film was of high crystalline quality and that the interface between the ZFO and STO was atomically sharp; no intermediate phase was observed at the interface. By contrast, for the ZFO grown on the Si substrate, the low-magnification TEM image (Figure 4f) reveals that the ZFO film consisted of a clear column-like structure. The surface was rough. The DP comprised ordered spots from the Si and many tiny randomly distributed spots and rings from the ZFO film. The ZFO film had a polycrystalline structure. The HR image and FFT patterns in Figure 4g show that the ZFO grains had different crystallographic orientations, and clear boundaries were present among the grains. According to the results of the TEM analyses, the ZFO thin film grown on the Si substrate was more structurally defective than the ZFO (222) and ZFO (400) epitaxial films. Figure 5 shows the room-temperature photoluminescence spectra of the ZFO thin films grown on the various substrates. A broad peak in the visible emission range with a maximum at approximately 560 to 580 nm was observed for the ZFO thin films. A blue emission band at approximately 468 nm has been observed in Zn-Fe-O compounds with interstitial zinc defects [23]. In the XPS analysis, the symmetrical Zn2p spectrum revealed that there were no excess Zn interstitials in the ZFO lattices, and hence no such blue emission band was observed in this study. A similar broad visible band, attributed to deep-level emissions caused by surface-oxygen-related defects, has been widely reported in ZnO oxides [24]. Insufficient oxygen in the sputtering process generates oxygen vacancies in the ZFO oxide during crystal growth, and this might have caused surface defects in the film, further inducing a yellow emission band.
Figure 6a,b,c shows the relationship between temperature (T) and magnetization (M) (zero-field-cooled (ZFC) and field-cooled (FC)) for the ZFO thin films. The M-T curves were similar among the samples. The observed increase in the M of all samples as the temperature decreased was caused by a stronger A-B interaction at lower temperatures in Zn-Fe-O lattices [25]. A non-zero M value was observed up to the maximum measurement temperature (350 K) in this study. The ZFC and FC curves diverged markedly below 40 K. The ZFC curves showed a broad peak with a clear summit region, indicating that the films were in a cluster-glass state [26]. The spin-glass transition temperature was observed to be nearly 40 K in this study, which is in agreement with results reported in the literature [27]. The bulk ZFO had a spin-glass transition temperature (Tg) of 20 to 30 K. The ZFO thin film had a slightly higher Tg value than did the bulk ZFO. This was attributed to the disordered cation distribution of Zn2+ and Fe3+ ions in the spinel structure [10]. Moreover, the random configuration of zinc and iron ions of the spinel structure was associated with oxygen vacancies in the lattices [9]. The XPS analysis results showed that the sputter-deposited ZFO thin films herein had some degree of oxygen vacancy, which might have contributed to the observed M-T results. Figure 7a,b,c displays the magnetization loops of the ZFO thin films grown on various substrates. The magnetic hysteresis loops were recorded at 30 K with the applied field H parallel (Hp) and perpendicular (Hv) to the film surface. At a measurement temperature of 30 K, the remanence was evident for all samples. Up to 6,500 Oe, the magnetization was far from being saturated. The M-H behavior clearly showed ferromagnetic coupling because of the A-O-B superexchange interaction. Some Fe3+ ions occupied the tetrahedral A-sites and activated the A-B superexchange interaction in the mixed spinel type [28]. When the field was applied parallel to the film surface, the magnetic hysteresis of the ZFO thin film grown on the YSZ substrate was more square than that of the films grown on the STO and Si substrates. The remnant magnetization was 5.5 × 10^−4 emu/cm², and the coercive field was 311 Oe. Moreover, when the field was applied perpendicular to the film surface, the hysteresis loop of the ZFO (222) epitaxy was the least square among those of all of the samples. The remnant magnetization was 8.2 × 10^−5 emu/cm², and the coercive field was approximately 140 Oe. The difference in the coercive field values when the field was parallel and perpendicular to the film surface was large for the ZFO (222) epitaxy, whereas that for the randomly oriented ZFO thin film was small (randomly oriented ZFO thin film: Hcp = 161 Oe and Hcv = 171 Oe). The magnetic hysteresis loops in the parallel and perpendicular directions were clearly separated, indicating the presence of magnetic anisotropy for the ZFO thin films on the various substrates. The ZFO (222) epitaxy exhibited the strongest magnetic anisotropy. For the spinel ferrite, the easy axis of magnetization was <100>, and the difficult axis was <111> [29]. When the field was applied perpendicular to the surface of the ZFO (222) epitaxial film, the field was parallel to the difficult magnetization axis [222] of the ZFO. This caused a less-square magnetic hysteresis loop of the ZFO (222) epitaxial film compared with that when the field was applied parallel to the film surface.
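Since the paragraph above quotes remnant magnetizations and coercive fields read off from the hysteresis loops, the following hedged sketch shows one standard way to extract those two numbers from a measured branch of an M-H loop by linear interpolation; the array names and sample data are placeholders, not the authors' data.

```python
import numpy as np

def remanence_and_coercivity(H, M):
    # Mr: magnetization where the branch crosses H = 0; Hc: |field| where M crosses zero.
    order = np.argsort(H)                               # np.interp needs increasing x
    Mr = np.interp(0.0, H[order], M[order])
    zero_cross = np.interp(0.0, M[order], H[order])     # valid when M is monotone near its zero
    return Mr, abs(zero_cross)

# Placeholder descending branch: field swept from +6500 Oe down to -6500 Oe.
H = np.linspace(6500, -6500, 1001)
M = 6e-4 * np.tanh((H + 300) / 800)    # toy branch with ~300 Oe coercive field
Mr, Hc = remanence_and_coercivity(H, M)
print(f"Mr ~ {Mr:.2e} emu/cm^2, Hc ~ {Hc:.0f} Oe")
```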
A similar magnetic hysteresis loop was observed for the ZFO thin film grown on the Si substrate when the field was applied parallel and perpendicular to the film surface. This was attributed to the random orientation of the magnetic grains in the thin film [30] and is supported by the structural analyses, which showed that the ZFO thin film grown on the Si substrate had a random crystallographic orientation.
Conclusions
ZFO spinel thin films exhibiting epitaxially and randomly oriented crystallographic features were grown on various substrates by RF magnetron sputtering at 650°C. The XRD and TEM results indicated that growing the ZFO thin films on the YSZ (111) and STO (100) substrates promoted the formation of (222) and (400) epitaxial films, respectively. The film grown on the Si substrate exhibited a polycrystalline structure. The surface morphology of the ZFO thin film substantially depended on its crystallographic features. The SEM and AFM images demonstrated that the surface of the ZFO (222) epitaxial film was flat and smooth; however, the surface of the randomly oriented film was rough and exhibited 3D grains. The visible emission bands of the ZFO thin films were attributed to growth-induced oxygen vacancies. The ZFO thin films demonstrated a spin-glass transition temperature of approximately 40 K. The ZFO (222) epitaxial film exhibited the most marked magnetic anisotropy among the samples.
"Materials Science"
] |
Erratum: A first-order dynamical transition in the displacement distribution of a driven run-and-tumble particle (2019 J. Stat. Mech. 053206)
We present here a revised version of the appendices of Gradenigo and Majumdar (2019 J. Stat. Mech. 053206). Some minor corrections are introduced and a new simplified argument to obtain the critical value of r_c, the control parameter for the transition, is presented. The overall scenario and the description of the transition mechanism depicted in Gradenigo and Majumdar (2019 J. Stat. Mech. 053206) remain completely untouched, the only relevant difference being the value of r_c, fixed to r_c = 2^{1/3} = 1.25992... rather than r_c = 1.3805.... This difference also implies small quantitative changes in figures 2 and 4; a new version of both figures is reported here. A couple of other typos discovered in the paper are pointed out and the correct versions of the expressions are reported. G Gradenigo and S N Majumdar
Amendments to appendix B
In this erratum, we report a corrected version of appendix B of [1], including the different subsections of appendix B, i.e. B.1-B.3. In section B.1, the mechanism for choosing the correct root is pointed out, and furthermore, some algebraic errors have been corrected in section B.2. This analytically gives the correct value of r_c = 2^{1/3} = 1.25992 (instead of the old value of r_c = 1.3805 which was numerically obtained in the published version). Consequently the correct value of z_c = 11.7771... replaces the old numerical value z_c ≈ 12.0. This change of z_c appears clearly in the new figure 4 of this erratum, where the dotted vertical line is clearly shifted to the left with respect to the same figure in the published version [1]. The argument to obtain r_c = 2^{1/3} is presented in section B.3. In order to facilitate the comparison with the figures of the present manuscript we have kept the same numbering as in [1]. Finally, we thank N Smith for pointing out the algebraic error in appendix B.2 of the published version.
B. Derivation of the rate function χ(z) in the intermediate matching regime
In this appendix we study the leading large N behavior of the integral that appears in the expression for P A (z, N ) in equation (56) of [1]: where z 0 can be thought of as a parameter and with σ 2 = 2 + 5E 2 . It is important to recall that the contour Γ (+) is along a vertical axis in the complex s-plane with its real part negative, i.e. Re(s) < 0. Thus, we can deform this contour only in the upper left quadrant in the complex s plane (Re(s) < 0 and Im(s) > 0), but we cannot cross the branch cut on the real negative axis, nor can we cross to the s-plane where Re(s) > 0. A convenient choice of the deformed contour, as we will see shortly, is the Γ (+) rotated anticlockwise by an angle π/2, so that the contour now goes along the real negative s from 0 to −∞.
To evaluate the integral in equation (1), it is natural to look for a saddle point of the integrand in the complex s-plane in the upper left quadrant, with fixed z. Hence, we look for the stationary points of the function F_z(s) in equation (2). They are given by the zeros of the cubic equation (3), z + σ²s − 1/(2Es²) = 0. As z_0 varies, the three roots move in the complex s-plane. It turns out that for z < z_l (where z_l is to be determined), there is one positive real root and two complex conjugate roots. For example, when z = 0, the three roots of equation (3) are at s = (2Eσ²)^{−1/3} e^{iφ} with φ = 0, φ = 2π/3 and φ = 4π/3. However, for z > z_l, all three roots collapse onto the real s-axis, with s_1 < s_2 < s_3. The roots s_1 < 0 and s_2 < 0 are negative, while s_3 > 0 is positive. For example, in figure B1, we plot the left-hand side of equation (3), i.e. F′_z(s), as a function of real s, for z = 12 and E = 2 (so σ² = 2 + 5E² = 22). One finds, using Mathematica, three roots at s_1 = −1/2 (the lowest root on the negative side), s_2 = −0.175186... and s_3 = 0.129732.... We can now determine z_l very easily. As z decreases, the two negative roots s_1 and s_2 approach each other and become coincident at z = z_l, and for z < z_l they split apart in the complex s-plane and become complex conjugates of each other, with their real parts identical and negative. When s_1 < s_2, the function F′_z(s) has a maximum at s_m with s_1 < s_m < s_2 (see figure B1). As z approaches z_l, s_1 and s_2 approach each other, and consequently the maximum of F′_z(s) between s_1 and s_2 approaches the height 0. Now, the height of the maximum of F′_z(s) between s_1 and s_2 can be easily evaluated. The maximum occurs at s = s_m where F′′_z(s_m) = 0, i.e. at s_m = −(Eσ²)^{−1/3}. Hence the height of the maximum is given by (4) F′_z(s_m) = z − (3/2)(σ⁴/E)^{1/3}. Hence, the height of the maximum becomes exactly zero when (5) z = z_l = (3/2)(σ⁴/E)^{1/3}. Thus we conclude that for z > z_l, with z_l given exactly in equation (5), the cubic equation (3) has three real roots at s = s_1 < 0, s_2 < 0 and s_3 > 0, with s_1 being the smallest (most negative) root on the real axis. For z < z_l, the pair of roots are complex conjugates. However, it turns out (as will be shown below) that for our purpose it is sufficient to consider evaluating the integral in equation (1) only in the range z > z_l, where the roots are real and evaluating the saddle point equations is considerably simpler. So, focusing on z > z_l, out of these three roots as possible saddle points of the integrand in equation (1), we have to discard s_3 > 0 since our saddle points have to belong to the upper left quadrant of the complex s-plane. This leaves us with s_1 < 0 and s_2 < 0. Now, we deform our vertical contour Γ^(+) by rotating it anticlockwise by π/2 so that it runs along the negative real axis. Between the two stationary points s_1 and s_2, it is easy to see (see figure B1) that F′′_z(s_1) > 0 (indicating that s_1 is a minimum of F_z along the real s-axis) and F′′_z(s_2) < 0 (indicating a local maximum). Since the integral along the deformed contour is dominated by the maximum along real negative s for large N, we should choose s_2 to be the correct root, i.e. the largest among the negative roots of the cubic equation z + σ²s − 1/(2Es²) = 0. Thus, evaluating the integral at s* = s_2 (and discarding pre-exponential terms) we get for large N where the rate function χ(z) is given by The right-hand side can be further simplified by using the saddle point equation (3), i.e. z + σ²s_2 − 1/(2Es_2²) = 0. This gives (8)
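The three numerical roots quoted for z = 12 and E = 2 are easy to reproduce; the following is an editorial sketch that simply rewrites the saddle-point condition as a polynomial and solves it numerically.

```python
import numpy as np

E, z = 2.0, 12.0
sigma2 = 2 + 5 * E**2                              # sigma^2 = 2 + 5 E^2 = 22
# Saddle-point condition z + sigma^2 s - 1/(2 E s^2) = 0, multiplied through by 2 E s^2:
coeffs = [2 * E * sigma2, 2 * E * z, 0.0, -1.0]    # 88 s^3 + 48 s^2 - 1 = 0
roots = np.sort(np.roots(coeffs).real)
print(roots)   # approximately [-0.5, -0.175186, 0.129732], as quoted in the text
```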
B.1. Asymptotic behavior of χ(z)
We now determine the asymptotic behavior of the rate function χ(z) in the range z l < z < ∞, where z l is given in equation (5). Essentially, we need to determine s 2 (the largest among the negative roots) as a function of z by solving equation (3), and substitute it into equation (8) to determine χ(z). We first consider the limit z → z l from above, where z l is given in equation (5). As z → z l from above, we have already mentioned that the two negative roots s 1 and s 2 approach each other. Finally at z = z l , we have s 1 = s 2 = s m where s m = −(Eσ 2 ) −1/3 is the location of the maximum between s 1 and s 2 . Hence as z → z l from above, s 2 → s m = −(Eσ 2 ) −1/3 . Substituting this value of s 2 in equation (8) gives the limiting behavior as announced in the first line of equation (24) in [1].
To derive the large z → ∞ behavior of χ(z) as announced in the second line of equation (24) in [1], it is first convenient to re-parametrize s_2 and define Substituting this into equation (3), it is easy to see that θ_z satisfies the cubic equation Note that due to the change of sign in going from s_2 to θ_z, we now need to determine the smallest positive root θ_z of equation (11). In terms of θ_z, χ(z) in equation (8) reads The formulae in equations (11)-(13) are now particularly suited for the large-z analysis of χ(z). From equations (11) and (12) it follows that in the limit z → ∞ we have b(z) → 0, so that θ_z → 1. Hence, for large z, or equivalently small b(z), we can obtain a perturbative solution of equation (11). To leading order, it is easy to see that with b(z) given in equation (12). Substituting this into equation (13) gives the large-z behavior of χ(z). The dependence on N of the prefactor on the right-hand side of both equations (56) and (85) in [1] is wrong: 1/√(N^{1/3}) must be replaced with N^{5/6}. In fact, the correct expression to be considered in place of equation (56) whereas the correct expression to be considered in place of equation (85) | 2,451.4 | 2020-04-14T00:00:00.000 | [
"Physics"
] |
Infinite Distances and the Axion Weak Gravity Conjecture
The axion Weak Gravity Conjecture implies that when parametrically increasing the axion decay constants, instanton corrections become increasingly important. We provide strong evidence for the validity of this conjecture by studying the couplings of R-R axions arising in Calabi-Yau compactifications of Type IIA string theory. Specifically, we consider all possible infinite distance limits in complex structure moduli space and identify the axion decay constants that grow parametrically in a certain path-independent way. We then argue that for each of these limits a tower of D2-brane instantons with decreasing actions can be identified. These instantons ensure that the convex hull condition relevant for the multi-axion Weak Gravity Conjecture cannot be violated parametrically. To argue for the existence of such instantons we employ and generalize recent insights about the Swampland Distance Conjecture. Our results are general and not restricted to specific examples, since we use general results about the growth of the Hodge metric and the sl(2)-splittings of the three-form cohomology associated to each limit.
Introduction
A generic feature of four-dimensional string compactifications is the existence of scalar fields with approximate shift symmetries appearing in the effective action. Such axions appear in many phenomenologically motivated models, including models of cosmic inflation, in which the axions are considered as inflaton fields [1,2]. The axion decay constants f, determined by the kinetic term of the axions, are of crucial importance and are, depending on the precise application, required to have a certain range of values. For example, in large-field inflationary models large values of f, comparable to the Planck mass, are often required. It has been suggested long ago, starting with [3], that large axion decay constants cannot be obtained in string theory, while at the same time controlling the instanton corrections that need to be included in a consistent formulation of the effective theory. This observation is formalized in the axion Weak Gravity Conjecture [4], which states that the axion decay constant times the action of an appropriate instanton coupling to the axion is smaller than the Planck mass times the instanton charge. Clearly, it is notoriously hard to establish such a conjecture, since it requires controlling the behaviour of the axion decay constants and then reliably determining the non-perturbative corrections to the effective action, as recently reviewed in [5].
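Schematically, and in conventions commonly used in the literature (the order-one factors and normalizations below are an editorial assumption, not fixed by the text), the statement just quoted and its multi-axion "convex hull" generalization can be summarized as:

```latex
% single axion: decay constant times instanton action bounded by charge times Planck mass
f \, S_{\rm inst} \;\lesssim\; q \, M_{\rm Pl} .
% several axions with kinetic matrix K_{ij}: to each instanton of integer charge
% vector q and action S_q associate the vector
\vec{z}_q \;=\; \frac{M_{\rm Pl}}{S_q}\, K^{1/2} q \, ,
% and require that the convex hull of the \vec{z}_q contains the unit ball.
```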
Recently, the Swampland Distance Conjecture [6] has received much attention. It states that an infinite tower of modes becomes exponentially light when approaching a point that is at infinite geodesic distance in field space. In particular, the recent constructions [15,23,27] suggest that effective theories near such infinite distance points exhibit universal properties that can be investigated model-independently and quantitatively. As a consequence, one might argue that any quantum gravity conjecture should first pass its validity tests in the limits in field space that lie at infinite geodesic distance. We will see in this work that this is also a fruitful path to test the axion Weak Gravity Conjecture and to uncover new mechanisms that can be relevant in understanding the underlying reasons for its validity. In particular, note that it was argued for the Swampland Distance Conjecture in [15] that the number of relevant states that have to be included in the effective theory has to grow with a certain rate depending on properties of the infinite distance point. In the present context we will find the analogous statement concerning the growth rate of the number of instantons that need to be included in the effective theory when approaching the infinite distance point.
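The exponential behaviour invoked here is usually parametrized as below; this display is an editorial gloss (the order-one constant α is not specified in the text above):

```latex
m(\phi) \;\sim\; m(\phi_0)\, e^{-\alpha\, d(\phi_0,\phi)} ,
\qquad d(\phi_0,\phi)\to\infty ,
```

where d(φ_0, φ) denotes the geodesic distance in the field-space metric between the reference point φ_0 and the point φ approaching the infinite distance locus.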
Our focus in this work will be on the dynamics of the R-R three-form axions in Type IIA string theory compactified on a Calabi-Yau threefold. These axions are part of the N = 2 hypermultiplet field space of the effective theory. The hypermultiplet moduli space of Type II string theory compactifications has been studied in detail, see e.g. [34,35] for reviews, or [36][37][38][39][40][41][42][43][44][45][46][47][48] for some papers on this subject. The classical dimensional reduction shows that their kinetic terms, and hence their squared axion decay constants, are proportional to the Hodge star metric on the Calabi-Yau manifold. This implies that they generally depend very non-trivially on the complex structure moduli of the Calabi-Yau threefold. It will be the first task of this work to describe their behaviour near infinite distance points in correlation with the general classification of infinite distance points in the complex structure moduli space [23,49]. We will find that the axion decay constants for some of the axions can, in general, grow to become increasingly large in the infinite distance limits. Instanton corrections should then become relevant in order that the axion Weak Gravity Conjecture is not violated parametrically. We suggest that these corrections stem from D-brane states and identify candidate D2-brane instantons wrapped on three-cycles of the Calabi-Yau manifold that non-trivially modify the effective theory at such points. Our construction follows the ideas of [15,23], which characterize a tower of D3-brane states wrapped on three-cycles that give the relevant particles for the Swampland Distance Conjecture in Type IIB compactifications.
The axion Weak Gravity Conjecture has already been investigated in various ways, see for instance [25,31,[50][51][52][53][54][55][56][57][58][59][60][61][62][63][64][65]. One of the main challenges in addressing the axion Weak Gravity Conjecture arises when dealing with higher-dimensional field spaces. In this case one does not expect a simple direct link between the instanton actions and the axion decay constants. In fact, the kinetic term of the axions will generally be given in terms of a non-diagonal field-dependent metric, and instanton actions will generally not align with any diagonalization attempt. A related issue in each higher-dimensional setting is the fact that we have to deal with path dependence when approaching an infinite distance point. In particular, the kinetic term for a certain axion might grow along one specific path, but might stay finite along another. It is one of our main tasks to address these general issues for the Type IIA setting under consideration. We will argue that there is a natural basis for the axions that is adapted to the infinite distance locus under consideration. More precisely, this special basis will arise from the fact that we can non-trivially associate an sl(2, C)^n-algebra to each infinite distance locus reached by sending n coordinates into a limit [66,67]. This algebra acts on the three-forms defining the axion basis and thus splits the axion space into subspaces. We can then generally determine the growth of these subspaces when reaching the infinite distance point. This allows us to focus on the set of axions with decay constants that grow parametrically in a certain path-independent way.
This paper is structured as follows. In section 2 we first review the classification of infinite distance points in complex structure moduli space. We then introduce a special real three-form basis adapted to the sl(2, C)^n-algebra associated to the infinite distance limit and discuss the growth of the associated Hodge metric. In section 3 we recall some of the recent insights about the Swampland Distance Conjecture for Calabi-Yau moduli spaces [15,23] and adapt the presentation to the special three-form basis of section 2, after which we generalize the stability argument presented in [15] to multi-variable settings. The axion Weak Gravity Conjecture will then be addressed in section 4. We identify a candidate tower of D2-brane instantons which prevents the parametric violation of this conjecture.
Note added: While we were in the process of writing up this paper, reference [31] appeared, which has some overlap with our work but is in many respects complementary. On the one hand, our Type IIA treatment is general and does not focus on particular infinite distance limits, while [31] focuses on specific limits in the large volume regime. On the other hand, [31] gives an interesting discussion of the modifications of the effective theories, which we will not address in the present work.
Infinite distances in Calabi-Yau moduli spaces
The aim of this work is to study the couplings of R-R axions in the four-dimensional effective theory. To motivate the use of the mathematical tools introduced in this section, let us first briefly recall the relevant structures in Type IIA Calabi-Yau compactifications. In this case we consider the axions ξ^I arising from expanding the R-R field C_3 into a basis γ_I of the third cohomology group H^3(Y_3, R) via C_3 = ξ^I γ_I . (2.1) The four-dimensional kinetic terms are readily derived to be of the form L_kin ∼ e^{2D} G_{IJ} ∂_μ ξ^I ∂^μ ξ^J , with G_{IJ} = ∫_{Y_3} γ_I ∧ ⋆ γ_J , (2.2) where e^D is the four-dimensional dilaton and ⋆ is the Hodge star of the Calabi-Yau threefold. Note that the metric G_{IJ}, crucial in defining the axion decay constants as we discuss below, non-trivially depends on the complex structure moduli through ⋆.
In this section we will discuss the techniques needed to analyze limits in the moduli space of Calabi-Yau manifolds in which the axion metric G_{IJ} grows. We will not be able to introduce the complete mathematical theory relevant to answering these questions, but rather restrict ourselves to stating some of its main results from [49,66,67,70,71]. For a proper mathematical review of the foundations of this subject see e.g. [72]. More details are also provided in [23].
On the geometry of the complex structure moduli space
In order to set the stage for the later discussions, let us first recall some basic properties of the complex structure moduli space M_cs. This moduli space is spanned by the complex structure deformations that preserve the Calabi-Yau property and has complex dimension h^{2,1}, where h^{p,q} = dim H^{p,q}(Y_3, C) are the Hodge numbers of Y_3. Since M_cs(Y_3) admits a special Kähler structure its metric can be derived from a Kähler potential K. Let us introduce local coordinates z^I with I = 1, ..., h^{2,1}. Furthermore, recall that the Calabi-Yau threefold Y_3 admits a unique (3,0)-form Ω(z) that varies holomorphically in these coordinates z^I. Ω can be used to define the metric on M_cs via the Kähler potential K = -log( i ∫_{Y_3} Ω ∧ Ω̄ ) . (2.3) Next we introduce a real, integral basis γ_I for H^3(Y_3, Z), with I = 1, ..., 2h^{2,1} + 2. This allows us to decompose Ω in its periods Π^I as Ω = Π^I γ_I . (2.4) Furthermore, we can construct a skew-symmetric product ⟨·,·⟩ on H^3(Y_3, C), or component-wise an anti-symmetric pairing matrix η, given by ⟨v, w⟩ = ∫_{Y_3} v ∧ w , η_{IJ} = ⟨γ_I, γ_J⟩ , (2.5) with v, w ∈ H^3(Y_3, C). Then we can express the Kähler potential as K = -log( i ⟨Ω, Ω̄⟩ ) = -log( i Π^I η_{IJ} Π̄^J ) . (2.6) Note that due to the skew-symmetry of ⟨·,·⟩ one can choose a symplectic basis γ_I = (α_K, β^L) with K, L = 0, ..., h^{2,1}. This basis satisfies the properties ⟨α_K, β^L⟩ = δ_K^L , ⟨α_K, α_L⟩ = ⟨β^K, β^L⟩ = 0 . (2.7) As we will discuss in detail in section 2.4 there is a very special choice of such a symplectic basis associated to the considered point in moduli space when analyzing asymptotic limits.
Limits in the complex structure moduli space
We next discuss the relevant limits in this complex structure moduli space M_cs. It will turn out that the metric G_{IJ} of the axions (2.2) can only grow unboundedly if we approach a point in M_cs at which the Calabi-Yau manifold Y_3 degenerates. Well-known examples of such degeneration points are the conifold point or the large complex structure point, but our analysis will be completely general and include also higher-dimensional degeneration loci. It can be shown that one can blow up M_cs in such a way that the subspaces at which Y_3 degenerates can locally be described as the vanishing locus of n coordinates z^1 = ... = z^n = 0. Instead of working with the z^i we will introduce new coordinates t^i = (1/2πi) log z^i, such that the limits of interest are given by t^1, ..., t^n → i∞ , ζ^κ fixed , (2.8) where ζ^κ are the coordinates that are not taken to a limit. Figure 1 (caption): Two paths γ_1, γ_2 towards the limiting point t^1 → i∞ (z^1 = 0) and t^2 → i∞ (z^2 = 0), respectively. The paths can lie in different growth sectors, since either the growth of y^1 or y^2 dominates in these cases. We also indicate the associated log-monodromy matrices N_1, N_2.
The growth of the axion metric G_{IJ} in the limits (2.8) will in general depend on the precise path we are taking to these limiting points. The mathematical machinery we intend to use does provide us with general growth estimates in case we first divide the space into sectors, so-called growth sectors, and then demand that the considered path lies within one such growth sector at least for sufficiently large y^i = Im t^i. One such growth sector is given by R_{1...n} = { t^j = x^j + i y^j : y^1/y^2 > λ, ..., y^{n-1}/y^n > λ, y^n > δ } , (2.9) where we can choose arbitrary positive λ, δ. Other growth sectors can be obtained by the same expression but with permuted y^i. Clearly, if a path lies within one of these sectors we can relabel the y^i such that the respective sector is given by (2.9). In figure 1 we illustrate two paths for the case that two t^i are sent to the limit. Let us stress that the requirement that the considered path resides in one growth sector introduces a mild path dependence into our analysis, since we exclude paths which are so complicated that they always pass through multiple sectors.
It is a famous mathematical result [66] that the limiting behaviour of the periods Π, and also of the metric G_{IJ}, crucially depends on the monodromy matrix T_i associated to the t^i = i∞ point. This monodromy matrix appears if one asks how Π transforms under t^i → t^i - 1 for some index i, i.e. one has Π(..., t^i - 1, ...) = T_i Π(..., t^i, ...) . (2.10) The monodromy matrices T_i turn out to possess several very useful properties. Firstly, all T_i associated to the limit (2.8) commute with each other. Secondly, if a T_i possesses a non-trivial unipotent part, it defines a nilpotent matrix (see footnote 5) N_i = log T_i . (2.11) The N_i also form a commuting set of matrices and one has η N_i = -N_i^T η. The nilpotent orbit theorem of [66] allows us to express Π in terms of the nilpotent matrices as (see footnote 6) Π(t, ζ) = e^{t^i N_i} ( a_0(ζ) + A(z, ζ) ) , (2.12) where we sum in the exponential over i = 1, ..., n. Here a_0 is a holomorphic function of the coordinates that are not sent to a limit in (2.8). Note here that the exponential yields a polynomial in t^i, since the N_i are nilpotent matrices. The important statement is that the vector A is holomorphic in z^i = e^{2πi t^i} and ζ and vanishes as z^i → 0, which allows for the above expansion with leading term a_0. This vector a_0 determines the asymptotic behavior of Π in the limit (2.8), since the other terms will be suppressed if we take Im t^i to be large. a_0 naturally defines a so-called nilpotent orbit given by Π_nil(t, ζ) = e^{t^i N_i} a_0(ζ) . (2.13) This nilpotent orbit is the starting point for our analysis of the asymptotic regions in M_cs.
Classifying infinite distance limits
The information captured by Π_nil or (N_i, a_0) can be used to classify infinite distance limits.
Recall that the distance between two points P, Q along a path γ is determined by the integral d_γ(P, Q) = ∫_γ ( g_{k l̄} ṫ^k ṫ̄^{l̄} )^{1/2} ds , (2.14) where in the complex structure moduli space one has to use the metric g_{k l̄} = ∂_{t^k} ∂_{t̄^{l̄}} K determined from (2.6). In order that the geodesic distance between the points P, Q is infinite, every path γ between P, Q has to be of infinite length. This can only potentially happen if one of the points, say P, is located at one of the loci t^1 = ... = t^n = i∞. However, not every such locus is at infinite distance. In fact, using the nilpotent orbit (2.13) and the properties of a_0 one shows that [71] P at infinite distance ⟺ N_{(n)} a_0 ≠ 0 , (2.15) Footnote 5: In the following we will assume that we have transformed the variables z^i and t^i such that only the unipotent part of T_i is relevant in the transformation (2.10). This procedure causes us to lose some of the information about the monodromies of orbifold singularities, but the aspects crucial to the infinite distances are retained. Footnote 6: Note that this statement is true up to an overall holomorphic rescaling of Π. Such rescalings induce a Kähler transformation of K given in (2.6). Unless otherwise indicated the following discussion is invariant under such rescalings.
as discussed in detail in [23]. Here we have defined N (n) = N 1 + ... + N n , but stress that every linear combination of the N i with positive coefficients could equally be used.
It was shown in [15,23] that it is crucial to actually distinguish several cases of infinite distance limits. In order to do that one needs to analyze the properties of N_{(n)} and η. In fact, one can analyze the occurring singularity for any step sending t^1, ..., t^i → i∞, for i = 1, ..., n, associating an N_{(i)} = N_1 + ... + N_i. For each such pair (N_{(i)}, η) one finds one of 4h^{2,1} types of limiting behaviours, denoted I_a, II_b, III_c, and IV_d [49]. One can show that the type at each step i also determines the highest integer d_i such that N_{(i)}^{d_i} a_0 ≠ 0 , (2.17) with d_i = 0, 1, 2, 3 for the four types I, II, III, and IV, respectively. We will use these labels throughout. It will be these limits in which we will study the behaviour of the axion metric G_{IJ} given in (2.2).
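To orient the reader with standard examples not worked out in this paper: a one-parameter large complex structure limit is of Type IV (one has N^3 a_0 ≠ 0, i.e. d_1 = 3) and lies at infinite distance, whereas a one-parameter conifold point has N a_0 = 0 and is therefore at finite distance, consistent with the criterion (2.15).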
A special three-form basis and the sl(2)-splitting
Having classified the infinite distance limits in M cs we next want to connect this information with the axion metric (2.2) for the axions ξ I arising in the expansion (2.1). In order to do that it turns out to be very useful to introduce a special basis γ I for H 3 (Y 3 , R), which is adapted to the limiting locus that we approach. More precisely, the basis will depend on the following set of data: (1) the monodromy matrices N i relevant for the considered limit; (2) the limiting vector a 0 relevant for the considered limit; (3) the growth sector (2.9) in which the considered path resides.
The rough idea is to split up H^3(Y_3, R) into a direct sum of smaller subspaces whose elements have a particular growth in the fields y^i = Im t^i approaching infinity. Furthermore, one finds a 'limiting Hodge metric' in which these spaces are orthogonal. At first, this seems like an impossible task, since even the approximate periods (2.13) contain numerous mixed terms due to the general form of the N_i and a_0. However, there is the famous formalism of [66,67] that allows one to systematically approach this problem.
The ingenious idea of [67] is to reformulate this structure such that it non-trivially 'decomposes' into sl(2, C)-blocks that decouple in a well-defined sense. More precisely, [67] constructs a set of n mutually commuting sl(2, C)-triples acting on H^3(Y_3, R) from the above local data (1)-(3). We will not describe the steps to actually perform this construction, but refer the reader to [23] for a detailed review and the study of an explicit example. Let us simply assert that we went through the relevant steps and introduce the commuting sl(2, C)-triples (N_i^-, N_i^+, Y^(i)) , i = 1, ..., n . (2.18) These triples satisfy the standard commutation relations [Y^(i), N_i^±] = ±2 N_i^± , [N_i^+, N_i^-] = Y^(i) , (2.19) with triples for different i commuting. We can now use these triples to split H^3(Y_3, R) into eigenspaces. Let us introduce the simultaneous eigenspaces V_ℓ = V_{ℓ_1 ... ℓ_n} ⊂ H^3(Y_3, R) , (2.20) defined by the conditions Y_{(i)} v = (ℓ_i - 3) v for v ∈ V_ℓ , with Y_{(i)} = Y^(1) + ... + Y^(i) , (2.21) where the ℓ_i ∈ {0, ..., 6} are integers representing the (shifted) eigenvalues of the operators Y_{(i)}. We have denoted by E the set of all possible vectors ℓ labelling non-trivial V_ℓ, collecting all eigenvalue combinations of (Y_{(1)}, ..., Y_{(n)}). Let us stress that the range of ℓ_i labelling non-empty V_ℓ is correlated with the type of limit associated to (N_{(i)}, η) when sending t^1, ..., t^i → i∞, as classified in section 2.3. Note that, using the fact that the singularity type can only increase when sending more t^i to the limit, we find that the range of ℓ_i successively increases with i.
One can derive several interesting properties of the vector spaces V_ℓ. Most important for us is the fact that dim V_ℓ = dim V_{6-ℓ} , (2.23) where we abbreviated 6 = (6, ..., 6). This implies that we can one-to-one identify a basis vector of V_ℓ and V_{6-ℓ}. Furthermore, these spaces V_ℓ satisfy certain orthogonality relations, as follows from (ℓ_i + r_i - 6) ⟨v_ℓ, v_r⟩ = ⟨Y_{(i)} v_ℓ, v_r⟩ + ⟨v_ℓ, Y_{(i)} v_r⟩ = 0 , obtained by using (2.21) and that ⟨·, Y_{(i)} ·⟩ = -⟨Y_{(i)} ·, ·⟩. Namely, it implies that the product between these spaces can only be non-zero if ℓ_i + r_i = 6. And since this should hold for all i, the vector spaces V_ℓ satisfy the orthogonality property ⟨V_ℓ, V_r⟩ = 0 unless ℓ + r = 6 . (2.25) It should now be clear from (2.23), (2.25), and the skew-symmetry of the inner product ⟨·,·⟩ that the V_ℓ naturally define a special symplectic basis with the properties (2.7).
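As a simple illustration of these relations (using only the properties just stated): in a one-parameter limit the splitting reads H^3(Y_3, R) = V_0 ⊕ V_1 ⊕ ... ⊕ V_6, and (2.25) pairs V_ℓ with V_{6-ℓ}; for instance ⟨V_2, V_4⟩ can be non-vanishing, while ⟨V_2, V_2⟩ = 0.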
Importantly, the limiting vector a_0 turns out to not generally fall into one of the complexified spaces V_ℓ ⊗ C of the splitting (2.20). However, as shown in [67] and discussed in detail in [23], there are always two real matrices ζ, δ which rotate a_0 into ã_0 = e^ζ e^{-iδ} a_0, such that ã_0 ∈ V_{d+3} ⊗ C , d + 3 = (d_1 + 3, ..., d_n + 3) , (2.26) with the d_i defined in (2.17). Crucially, this construction is such that one also finds that the complex conjugate of ã_0 lies in V_{d+3} ⊗ C, such that Re ã_0 , Im ã_0 ∈ V_{d+3} . (2.27) The vector ã_0 generally depends on the remaining coordinates ζ^κ not taken to a limit in (2.8).
It can be used to define the so-called Sl(2)-orbit Π_Sl(2)(t, ζ) = e^{x^i N_i} e^{i y^i N_i^-} ã_0(ζ) . (2.28) This orbit asymptotically approximates the nilpotent orbit (2.13) in the limit y^1 ≫ y^2, ..., y^{n-1} ≫ y^n, y^n → ∞ at x^i = 0.
It should be stressed that many key properties of our later constructions are contained in this very non-trivial sl(2)-split (2.20) of H^3(Y_3, R). One of these properties, namely the growth of the Hodge metric in this basis, is discussed next.
Asymptotic behavior of the Hodge norm
One of the remarkable applications of the sl(2)-splitting, which we introduced in section 2.4, is to obtain an asymptotic expression for the Hodge metric that appears, for example, in the definition (2.2) of the axion metric G IJ .
Let us first introduce some notation and define the Hodge norm of a three-form v ∈ H^3(Y_3, C) by ‖v‖^2 = ∫_{Y_3} v ∧ ⋆ v̄ . (2.29) As stressed above, the Hodge star ⋆ in general depends very non-trivially on the complex structure moduli. In order to make this dependence explicit, one can decompose v into its components in H^{p,q}(Y_3, C), with p + q = 3. The individual components can then be expressed in terms of the period vector Π and its Kähler-covariant derivatives. It turns out that one can control the asymptotics of the periods Π in the limits (2.8), which then leads to an asymptotic expression for the Hodge metric. To make the asymptotic form of the metric explicit, we first introduce a limiting Hodge norm [67,72] ‖v‖_∞^2 = ∫_{Y_3} v ∧ ⋆_∞ v̄ . (2.30) While we will not define ⋆_∞ in any detail, let us record some of its properties. Firstly, ‖·‖_∞ is finite in the limit (2.8), since ⋆_∞ no longer depends on the fields t^1, ..., t^n. It is adapted to the splitting of the vector space H^3(Y_3, R) introduced in (2.20) via the orthogonality relations ⟨V_ℓ, ⋆_∞ V_r⟩ = 0 unless ℓ = r . (2.31) The norm (2.30) can be used to give an asymptotic expression for the original Hodge norm (2.29) in the limit (2.8). Decomposing a general real three-form u into its components u_ℓ ∈ V_ℓ one has [67,70] ‖u‖^2 ∼ Σ_{ℓ ∈ E} (y^1/y^2)^{ℓ_1-3} ··· (y^{n-1}/y^n)^{ℓ_{n-1}-3} (y^n)^{ℓ_n-3} ‖u_ℓ‖_∞^2 , (2.32) within the growth sector (2.9). This expression will be our main tool in the rest of this paper to evaluate the growth of the axion metric G_{IJ} and the associated D2-brane instanton actions.
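To make the scaling concrete in the simplest situation, a one-parameter limit with n = 1: a component u_ℓ ∈ V_ℓ has ‖u_ℓ‖^2 ∼ y^{ℓ-3} ‖u_ℓ‖_∞^2, so its Hodge norm decreases for ℓ < 3, stays parametrically constant for ℓ = 3, and grows for ℓ > 3 as y → ∞.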
The Swampland Distance Conjecture for Calabi-Yau moduli
In this section we revisit the recent constructions of [15,23] that provided strong evidence for the validity of the Swampland Distance Conjecture (SDC) for all infinite distance limits in M_cs(Y_3). The crucial observation of these works is that one can relate the classification of infinite distance points, recalled in section 2.3, with the existence of a tower of D3-branes wrapped on three-cycles of Y_3 with masses becoming exponentially light when approaching the infinite distance points.
One of the main tasks in establishing such a picture is the search for suitable three-cycles that can host such states. While it will not add much to the strategy presented in [15,23], we will reformulate and generalize the statements using the special basis introduced in section 2.4. This reformulation turns out to be an elegant way of stating the findings and will serve as a prelude to section 4, where we will consider axion decay constants and Euclidean D2-branes wrapping three-cycles of Y 3 . Furthermore, we will generalize the stability properties of the D3-brane states of [15], where they studied the one-parameter setting, to the multi-parameter infinite distance limits considered in [23], which will play an important role in the test of the axion Weak Gravity Conjecture in section 4.
Construction of the D3-brane states
Let us begin by introducing the necessary basic properties of the three-cycles that can host the D3-brane states required to satisfy the SDC. In order to specify the state that we obtain from wrapping a D3-brane on a three-cycle of Y_3, we will give its Poincaré dual three-form Q = q^I γ_I ∈ H^3(Y_3, R). Since we are mainly interested in the mass of this state we will not discuss the quantization of Q in the following. Since we are performing a Calabi-Yau compactification the four-dimensional theory is an N = 2 supergravity theory. Our aim is to consider candidate charges Q that correspond to BPS states. This non-trivial assertion will have severe consequences, as we discuss below. In the following we will first establish the conditions on Q in order that the corresponding D3-brane state becomes light in an infinite distance limit t^1, ..., t^n → i∞ introduced in (2.8).
Given a BPS D3-brane state with charge Q we can compute its mass by evaluating its central charge using M(Q) = |Z(Q)|, with the central charge given by Z(Q) = e^{K/2} ⟨Π, Q⟩ , (3.1) where Π are the periods appearing in (2.6) and ⟨·,·⟩, ‖·‖ were defined in (2.5), (2.29). We are interested in finding the charge vectors Q such that the states become massless in the infinite distance limit. Since ⋆Π = -iΠ, which implies e^{-K} = ‖Π‖^2, we can apply the Cauchy-Schwarz inequality to find the following upper bound on the mass of the state M(Q) = e^{K/2} |⟨Π, Q⟩| ≤ e^{K/2} ‖Π‖ ‖Q‖ = ‖Q‖ . (3.2) This implies that a sufficient condition for becoming light in the limit (2.8) is that ‖Q‖ → 0. Classifying in a higher-dimensional moduli space the states that admit such a behaviour for a general path approaching the infinite distance point is clearly challenging. However, as we will see in the following, the machinery introduced in section 2 allows us to do this for all paths that reside in a single growth sector.
Let us now consider a path of the form (2.8) that approaches an infinite distance point and eventually resides in the growth sector (2.9). In this case we can apply the growth result (2.32). The requirement that Q behaves as ‖Q‖ → 0 along each such path leads us to define the vector space V_light = ⊕_{ℓ ∈ E_light} V_ℓ , (3.3) where E_light collects those ℓ for which (2.32) implies a decreasing norm along every path in the growth sector, and we recall that the V_ℓ are the vector spaces introduced in (2.20). If we require Q ∈ V_light we thus have a corresponding state that becomes massless at the infinite distance point. Two remarks are in order here. Firstly, the requirement ‖Q‖ → 0 is a sufficient condition for masslessness, since ‖Q‖ gives an upper bound for M(Q). In other words, there could be states that become massless in the infinite distance limit whose charges are not in V_light. Secondly, the set V_light labels states that are massless along any path in the considered growth sector. Along special paths there could be more states becoming light than captured by V_light. Let us already mention that this path-independence requirement will be equally relevant when studying the axion decay constants and thus will be discussed further in section 4.
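As an illustration, using the one-parameter scaling noted after (2.32) and anticipating the one-parameter examples discussed later: for n = 1 the norm of a component in V_ℓ behaves as y^{(ℓ-3)/2}, so V_light = V_0 ⊕ V_1 ⊕ V_2, the space V_3 has parametrically constant norm, and the remaining spaces V_4, V_5, V_6 host growing norms.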
Next we want to make a distinction between two types of components for Q, such that we can write it as Q = Q_G + Q_F , (3.4) where the subscripts indicate that we are dealing with Type G states and Type F states, respectively. We define this split such that the former are intimately linked to the presence of gravity, while the latter are also present in purely field theoretic settings. Concretely, we require that Type G states become massless as a power law in the y^i, whereas Type F states do so at an exponential rate. These growth rates can be inferred from the product of the charge vectors Q with the nilpotent orbit Π_nil. Namely, by using the nilpotent orbit approximation for Π we neglect exactly the exponential terms that determine the mass of Type F states. Then the conditions for Type F and Type G states can be phrased as ⟨Π_nil, Q_F⟩ = 0 , ⟨Π_nil, Q_G⟩ ≠ 0 . (3.5) Note that this does not define a unique split. The set of Type F states defines a vector space. However, Q_G should be viewed as representing an equivalence class, since we are free to add any Type F charges to a Type G state. This last feature will be crucial in the context of the axion Weak Gravity Conjecture later on.
Although Type G states do not form a vector space, we can still write down a basis of representatives {q_Î}, Î = 0, 1, ..., for the Type G charges. However, relating such a basis to the subspaces V_ℓ turns out to be more difficult. Let us therefore derive a sufficient condition that determines a set of Type G vectors. The main issue is that the nilpotent orbit approximation relies on the vector a_0, whereas it is Re ã_0 and Im ã_0 that have a definite location (2.27) in the spaces V_ℓ. To relate the nilpotent orbit and the Sl(2)-orbit (2.28) directly, we must take a special limit. Namely, if we take the limit y^1 ≫ y^2, ..., y^{n-1} ≫ y^n, y^n → ∞ we can replace Π_nil by Π_Sl(2). Now let us apply this to the defining property of a Type F state. Since it should hold for all values of y^i, we find the implication ⟨Π_nil, Q_F⟩ = 0 ⟹ ⟨Π_Sl(2), Q_F⟩ = 0 . (3.6) We can use the negation of this statement to find Type G states. Namely, it tells us that a charge q with ⟨Π_Sl(2), q⟩ ≠ 0 is necessarily of Type G. Thus the search for a basis of Type G states can be partly fulfilled by finding vectors q that satisfy ⟨Π_Sl(2), q⟩ ≠ 0 . (3.7) And because Π_Sl(2) can be expressed in terms of ã_0, we can directly relate such vectors to the vector spaces V_ℓ. We can thus use (3.7) to derive a relevant set of Type G states.
With these preliminaries, we are now able to further discuss the construction of the charge vectors Q for the infinite tower of states. We already noted that we will restrict our considerations to charges Q ∈ V_light defined in (3.3). Furthermore, we pointed out that only the Type G charge of these states is relevant in the context of the SDC. These can be determined using the condition (3.7) and the polarization constraint (3.8) on ã_0, where d = (d_1, ..., d_n) was given in (2.27). Hence, we see that by using the sl(2, C)^n-algebra a set of candidate vectors generating the Type G states is given by acting sufficiently many times with N_1^-, ..., N_n^- on the vectors Re ã_0 and Im ã_0, such that the resulting vectors are in V_light. In order to do that it is convenient to recall how the N_i^- shift the sl(2)-weights, cf. (2.19), and that the location of Re ã_0 and Im ã_0 is V_{d+3} as given in (2.27). Let us denote by {q_Î}, Î = 0, 1, ..., the set of charge vectors obtained by acting with N_1^-, ..., N_n^- on Re ã_0, Im ã_0 that are located in V_light.
Having constructed representatives q_Î of all Type G states, we can check their respective growths by using (2.32). By definition of V_light all such states will have decreasing norm ‖·‖ when approaching the singularity. Let us denote by q_0 the state with the slowest decrease. In order to do that we consider the growth of a vector q_Î to be smaller than or equal to the growth of q_Ĵ if there exists a finite constant γ such that ‖q_Î‖ ≤ γ ‖q_Ĵ‖ (3.10) along every path approaching t^1, ..., t^n → i∞ in the considered growth sector (2.9). It is not hard to check in examples that this gives a well-defined transitive order among the q_Î, since they are in V_light and constructed from Re ã_0, Im ã_0. It is, however, important to stress that there are cases in which there is no unique element with the slowest decrease. We can then consider the set of elements with the slowest decrease and pick any element, calling it q_0.
We now have sufficient preparation to introduce the infinite set of charge vectors that we will consider. We thus define Q_G(q_0|m^I) = q_0 + m^I q_I , (3.11) with m^I some integer coefficients. For simplicity, we will take the m^I to be non-negative in the rest of this paper. It was argued in [15] that the tower of states relevant to the SDC should arise from increasing the numbers m^I. The intuitive argument for this statement came from considering the stability of BPS states, i.e. by asking if the states (3.11) labelled by m^I can possibly decay. For the general expression (3.11) of Q_G(q_0|m^I) stability is very hard to analyze. However, as noted in [15] the situation improves if the charges Q_G(m^I) can be represented as an orbit generated by the lowering operators, Q_G(q_0|m^I) = e^{m^I N_I^-} q_0 , (3.12) so that in particular q_I = N_I^- q_0. In this case one can argue for stability by using the phase shifts of the central charge, and we will discuss this in more detail in section 3.2. However, as we will see below and was already pointed out in [15], such an orbit does not exist for every type of limit. In these cases one can still write down a tower of states (3.11) labelled by integers m^I by using several Type G states with the same growth. However, in such cases one loses the stability arguments valid for the orbit (3.12) and different arguments would have to be employed. We stress here that a slightly more involved construction of (3.11) and (3.12) was suggested in [23]. Namely, it was shown that there exists a natural construction of the orbit, and hence of the q_I, if the type of the singularity enhances further when sending more than n coordinates to a limit. We will not need this construction in the following when working with (3.11).
Having discussed the Type G component Q_G of the charge vectors, let us next turn to the Type F component Q_F. As mentioned above, we are in principle free to add any Type F charges to our charge vector, since these will only result in additional terms for the central charge Z(Q) that are exponentially suppressed. Thus from the perspective of the SDC, we do not have to keep track of such components of the state. However, it will be crucial from the perspective of the axion WGC to include these Type F charges for the state. We can therefore consider the following generalized charge vector Q(q_0|m^I, m^s) = Q_G(q_0|m^I) + Σ'_s m^s q_s , (3.13) where the prime indicates that we should exclude the basis vectors q_0, q_I in the sum over the Type F charge vectors. This means that the Type F charges m^s exclude these components as well.
On the stability of the D3-brane states
Now that we have discussed which D3-brane states become light in the infinite distance limit, we want to examine the stability properties of this tower of states. In [15] it was already found for one-parameter Type IV infinite distance limits that, at a given instance along the limit, only a finite number of these states are stable against decays. Let us denote this finite number by m I crit for the charge generated by q I , which indicates the critical length of our tower of states. They found that this critical length scaled as m 1 crit ∼ y 1 , such that the length of the tower increases as we move further along the infinite distance limit, and that the tower becomes of infinite size as we send y 1 → ∞. Here we will generalize this feature to multi-parameter infinite distance limits, which will play an important role in the test of the axion Weak Gravity Conjecture in section 4.
The arguments made in [15] relied crucially on aspects of N = 2 BPS states, and thus so will ours. Let us therefore begin by shortly recalling their stability properties, and refer to the original articles [73][74][75][76][77][78][79][80][81][82][83]. Consider three BPS states A, B and C, with their charge vectors denoted by q_A, q_B and q_C respectively. We want to study the situation where state C is unstable against decay into anti-state Ā and state B. The charge vectors of these states must satisfy q_C = q_Ā + q_B , (3.14) with q_Ā = -q_A. The masses of these states satisfy M(C) ≤ M(Ā) + M(B) , (3.15) which follows from the linearity of the central charge in the charges. The state C can only become unstable against decay when this inequality is saturated. This statement can be made more explicit by considering the (normalized) phase of the central charge ϕ(A) = (1/π) arg Z(q_A) . (3.16) The alignment of the phases ϕ(B) = ϕ(Ā) then indicates that C becomes unstable against the decay into Ā and B, which corresponds to |Z(q_Ā) + Z(q_B)| = |Z(q_Ā)| + |Z(q_B)| . (3.17) The loci of this equation in M_cs are called curves of marginal stability, such that if we cross this curve in the moduli space, the state C is only marginally stable against decay. In essence, this boils down to a restriction of the relative phase to the range (-1, 1) in order for the state C to remain stable, as discussed in more detail in [75,78] in the context of so-called stable pairs. This aspect played an important role in the argument presented in [15], where the states of the tower were generated by circling the infinite distance point until the phase of the central charge began to rotate and a curve of marginal stability was crossed. The number of windings before encountering this curve then set the critical length m^1_crit of the tower. It is important to mention that the product states Ā, B cannot be chosen arbitrarily such that (3.14) holds; they must be mutually non-local as well, that is, ⟨q_Ā, q_B⟩ ≠ 0 [73,74].
Phrased oppositely, this tells us that two states can only form a bound state if their charges are mutually non-local. Then for states that lie in V_light it is useful to recall that, by use of the orthogonality relations (2.25), this vector space satisfies the following property ⟨V_light, V_light⟩ = 0 . Therefore no two states that lie in V_light can form a bound state together. In particular, this tells us that any two states Q(q_0|m^I, m^s), Q(q_0|m'^I, m'^s) in our infinite tower are mutually local, such that they cannot form a bound state. It also indicates that the product states that result from the decay of a state Q(q_0|m^I, m^s) of our tower must necessarily have charges that do not lie in V_light in order to have a non-zero intersection product between these states.
Before we argue for stability properties of our tower of states, it will prove to be useful to examine the asymptotic properties of the central charge of Type G charge vectors in more detail.
To be more precise, we want to consider Type G charges that are constructed out of ã_0 by applying lowering operators N_i^-, which will always be realized in our constructions. Therefore, we want to rewrite the nilpotent orbit (2.13) using the sl(2, C)-data. This can be done by rewriting the orbit as [67,72] Π_nil = α(x, y) e^{x^i N_i} e(y)^{-1} p(y) e^{-i N_{(n)}^-} ã_0 , (3.19) where α(x, y) is some overall coefficient function that accounts for the freedom to rescale the periods. Crucially, the nilpotent orbit contains the matrix-valued function e(y), acting on V_ℓ as multiplication by (y^1/y^2)^{(ℓ_1-3)/2} ··· (y^n)^{(ℓ_n-3)/2} , (3.20) which encodes the asymptotic behaviour in the limit y^1, ..., y^n → ∞ and is the origin of the scaling in (2.32). The complex matrix-valued function p(y) is a polynomial in (y^1/y^2)^{-1/2}, ..., (y^{n-1}/y^n)^{-1/2}, (y^n)^{-1/2} with constant term 1. Note that the other terms in this polynomial can be bounded by factors of λ^{-1/2} within the growth sector (2.9).
Then consider the central charge Z(q) of a Type G charge q that is located in a single eigenspace V_r. For simplicity let us set x = 0, since we want to determine the growth in the coordinates y^i. Then we obtain from (3.1) and (3.19) an expression involving ⟨e(y)^{-1} p(y) e^{-i N_{(n)}^-} ã_0 , q⟩. By properties of the symplectic product ⟨·,·⟩ we can move e^{-1} to the other side of the product as e, and applying it to q then results in the same growth behavior as one finds for ‖q‖ using (2.32). Inserting (3.19) into the expression for e^{K/2} we then obtain Z(q) ∼ e^{iθ} ⟨p(y) e^{-i N_{(n)}^-} ã_0 , e(y) q⟩ , (3.21) where θ is the overall phase inherited from α, and we stress that this equation is true only up to overall numerical factors. Note that for relative phases between two central charges the factor e^{iθ} cancels, such that the only remaining part of the phase is determined by ⟨p(y) e^{-i N_{(n)}^-} ã_0 , q⟩.
To simplify this expression even further, consider a Type G charge vector q that is constructed out of ã_0 by acting with lowering operators N_i^-. We can infer the leading behaviour of this pairing from the polarization condition (3.22). Now we want to investigate how far we can move up into our tower before states start to become unstable, where we will set the Type F charges to zero for simplicity. Thus we consider a state q_0, and look at how the phase of its central charge ϕ(q_0) shifts as we move up in the tower to a non-zero value m^J for a Type G charge. Then we find that ϕ(Q_G(q_0|m^J)) - ϕ(q_0) ∼ Im log( 1 + m^J Z(q_J)/Z(q_0) ) ∼ m^J ‖q_J‖/‖q_0‖ , (3.24) where we expanded this logarithm and used (3.22). Note the analogy with [15], where the tower of states was generated by circling the infinite distance loci, which resulted in these phase shifts. This expression is simply the generalized version of those phase shifts to multi-parameter infinite distance limits, and it reduces to the one-parameter result ‖q_0‖/‖q_1‖ ∼ y^1 by picking q_0 = (N_1^-)^2 ã_0 and q_1 = (N_1^-)^3 ã_0 and using (2.32). It hints at some critical scale such that the phase of the central charge Z(Q_G(q_0|m^J)) potentially shifts by 1/2. Whether this phase shift actually occurs depends on whether the central charges of q_0 and q_J differ in phase, as can be seen in the intermediate expression in (3.24). For charge vectors given in orbit form (3.12) this phase difference is always realized, since q_J = N_J^- q_0, and thus the numerator picks up one factor of i fewer than the denominator by use of the polarization condition (3.8). For charge vectors that cannot be written in orbit form, one needs to go more carefully through the polarization conditions (3.8).
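As a check of this reduction, spelled out for a one-parameter Type IV limit using the growth (2.32): with ã_0 ∈ V_6 one has q_0 = (N_1^-)^2 ã_0 ∈ V_2 and q_1 = (N_1^-)^3 ã_0 ∈ V_0, so ‖q_0‖ ∼ y^{-1/2} and ‖q_1‖ ∼ y^{-3/2}, giving ‖q_0‖/‖q_1‖ ∼ y and hence a critical tower length m^1_crit ∼ y^1, in agreement with [15].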
We can then see this shift in the phase of the central charge Z(Q_G(m^J)) as we increase m^J as an indication that the states will become unstable after a certain critical scale m^J_crit. Namely, we know from the stability properties of BPS states that a state can become unstable when phases of central charges rotate, and (3.24) tells us when such a rotation occurs. Thus the phase of Z(Q_G(m^J)) must rotate in the region given by ‖q_0‖/‖q_J‖ ∼ m^J, such that the relative phase between product states can potentially rotate out of the stable range (-1, 1), and Q_G(m^J) then becomes unstable. Therefore it is a sufficient condition to require m^J ≲ m^J_crit ∼ ‖q_0‖/‖q_J‖ to ensure the stability of the states that we consider at a given instance along the limit. Note that different arguments would have to be employed for the growth of the tower in the case that the norms of q_0 and q_J have the same growth behavior. However, for the purposes of this work such properties will not be needed.
We expect a similar story to hold for the Type F charges of our states, that is, the charges m^s must have some upper bound m^s_crit as well. This bound can be motivated from the fact that their contribution to the central charge vanishes asymptotically by construction (3.5), since Type F charges enter the central charge via the exponentially suppressed corrections to the nilpotent orbit approximation (2.13). The upper bound m^s_crit for Type F charges should then grow such that their contributions to the central charge still vanish asymptotically, which suggests a growth rate comparable to an exponential in the y^i.
One-parameter infinite distance limits
It is instructive to briefly review the properties of the towers of states that arise in one-parameter infinite distance limits. This means that we consider Type II_b, III_c, and IV_d infinite distance limits with one parameter y^1. A first task is to determine the dimensions of the vector spaces V_ℓ. Using appendix A.1 and the classification in [23,49], we readily find the result listed in table 3.1. The next step is to find the vectors that generate the Type G charges. As discussed in section 3.1 they satisfy the condition (3.7), ⟨Π_Sl(2), q⟩ ≠ 0. Here Π_Sl(2) can be expressed in terms of the nilpotent matrix N_1^- and the vector ã_0. Therefore it will prove to be useful to recall some polarization conditions for this vector ã_0. To begin with, we know that Re ã_0, Im ã_0 ∈ V_{d+3}, with d = 1, 2, 3 for a Type II, III or IV infinite distance limit, respectively. Furthermore, the complex conjugate of ã_0 equals ã_0 for a Type IV infinite distance limit, such that ã_0 is real. And for a Type II or III singularity we have an additional polarization condition on ã_0, cf. (3.8). Together these relations suffice to identify basis vectors q_0, q_1 for the Type G charges of our charge vector Q, which have been included in table 3.1. We should note that q_0, q_1 ∈ V_{3-d} for Type II and III infinite distance limits, whereas q_0 ∈ V_2 and q_1 ∈ V_0 for a Type IV singularity. Now let us discuss the stability of the tower of states for each of these infinite distance limits, that is, we will comment on the critical size m^1_crit of the tower of states at a given instance along the infinite distance limit. For Type IV infinite distance limits it has already been argued that m^1_crit ∼ y^1 in [15], which is in agreement with our discussion in section 3.2. In contrast, for Type II and III infinite distance limits we cannot directly apply the stability arguments after (3.24), since q_0, q_1 have the same growth. In specific examples, one might be able to argue for an m_crit using global properties of the moduli space [27,30]. Fortunately, we will not need an expression for m_crit in these cases.
The Weak Gravity Conjecture for R-R axions
In this section we study the couplings of R-R axions in the context of the Weak Gravity Conjecture (WGC) for axions. Let us therefore shortly recall this conjecture [4]. In the case that one has only one axion, the conjecture states that there exists an instanton coupled to the axion such that f S ≤ q M_p , (4.1) where f denotes the axion decay constant, S is the instanton action, and q is the instanton charge. For a given axion ξ, with a periodic field range ξ ≅ ξ + 2π, we define its decay constant f via its kinetic term L_kin = (f^2/2) ∂_μ ξ ∂^μ ξ . (4.2) The instanton charge is defined by noting that the axion couples to the instanton via the exponential e^{-S + i q ξ}. This implies that the periodicity of this contribution is 2π/q and the field range of the canonically normalized axion is 2πf/q.
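Spelled out in the canonically normalized field, as a small consistency check of the statements just made: with φ = f ξ the instanton factor reads e^{-S + i q φ/f}, so the periodicity of the canonical axion seen by this instanton is 2πf/q, and (4.1) can be read as the statement that this effective field range is bounded by 2π M_p / S.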
The formulation of the axion WGC becomes significantly more subtle in a higher-dimensional axion space. Treating such cases requires us to introduce the vectors z_a^I = (f^{-1})^{IJ} q_{aJ} / S_a , (4.3) where (f^{-1})^{IJ} is the inverse of the matrix of axion decay constants f_{IJ}, S_a is the action of an instanton labelled by a, and q_{aJ} encodes the axion coupling such that the instantons are weighted by a factor exp(-S_a + i q_{aJ} ξ^J). The axion decay constants are defined to diagonalize the metric of the axions as G_{IJ} = (f^T f)_{IJ}. It was suggested in [50,52] that the generalization of (4.1) is the statement that the convex hull spanned by ±z_a = ±z_a^I γ_I contains the ball of radius M_p^{-1}, where γ_I are the basis vectors in axion space normalized such that the axion ξ^I has a 2π periodicity. It is important to notice that there are various refinements of this conjecture which propose stronger conditions [4,15,62,64,65]. Most relevant for us will be the statement of the strong axion WGC, which states that the convex hull condition should be satisfied for the z_a constructed from the instantons with the smallest actions. In other words, one orders the instantons coupling to some axion direction by the value of their action and only retains the one with the smallest value of S_a to construct the vectors (4.3).
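The convex hull condition is straightforward to check numerically once the metric G_{IJ}, the instanton charges q_{aJ} and the actions S_a are specified at a given point in moduli space. The following is a minimal illustrative sketch of how such a check could be organized (assuming the conventions of (4.3) with M_p = 1, and using numpy/scipy; the function name and interface are merely our own illustration, not part of the analysis of this paper):

import numpy as np
from scipy.spatial import ConvexHull

def convex_hull_wgc_satisfied(G, charges, actions, radius=1.0):
    # G: positive-definite axion metric G_IJ (n x n)
    # charges: instanton charges q_{aJ}, shape (num_instantons, n)
    # actions: instanton actions S_a, shape (num_instantons,)
    # radius: radius of the ball the hull must contain (M_p^{-1} = 1 here)
    f = np.linalg.cholesky(G).T                  # 'vielbein' with G = f^T f
    f_inv = np.linalg.inv(f)
    z = (charges @ f_inv.T) / actions[:, None]   # z_a^I = (f^{-1})^{IJ} q_{aJ} / S_a
    points = np.vstack([z, -z])                  # include the anti-instantons
    hull = ConvexHull(points)                    # assumes the z_a span the full axion space
    # hull.equations stores outward unit normals and offsets; the origin lies
    # inside by symmetry, so its distance to each facet is -offset, and the
    # ball fits inside iff every facet is at least 'radius' away.
    return bool(np.min(-hull.equations[:, -1]) >= radius)

Feeding in, for instance, the asymptotic decay constants (4.12) and the charges (4.15) of the D2-brane instantons at increasing values of y^i would make the parametric statements of the following subsections quantitative for a concrete example.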
Before we proceed with the analysis of the general multi-axion setup, let us already give a qualitative outline of what we can expect. In the four-dimensional Type IIA Calabi-Yau compactifications that we will be considering, the quantities f and S vary non-trivially over the complex structure moduli space M_cs(Y_3). If we take an infinite distance limit in M_cs, we will find that f and S have certain growth rates in the coordinates y^i by use of the growth properties (2.32). The axion WGC (4.1) then suggests that the parametric growth of the axion decay constant f should be cancelled by the decrease of the instanton action S. Therefore, our task is to find, for an axion direction with a parametrically growing decay constant, an instanton that couples to this axion with an instanton action that decreases at a sufficient rate. It is at this stage that the three-cycles of Y_3 discussed in the previous section become relevant. Namely, instead of wrapping D3-branes on these three-cycles to find states that become massless in infinite distance limits, these three-cycles can now host Euclidean D2-branes with a parametrically decreasing action. We will find that we can couple every R-R axion to one of these instantons, provided that its axion decay constant f grows in a path-independent manner. The fact that our tower of instantons also grows at a certain rate specified by m_crit, discussed in section 3.2, implies that the instanton charge q grows parametrically as well, which will play a crucial role in this test of the axion WGC. Namely, we will find that the decrease of the instanton action S will not always suffice to cancel the growth of the axion decay constant f completely, and that the leftover growth is then matched by the growth of the instanton charge q.
Asymptotic axion decay constants for R-R axions
For completeness, we will first recall some basic aspects of the R-R axions. As mentioned before, these axions ξ^I follow from expanding the R-R field C_3 into a basis of harmonic three-forms γ_I of H^3(Y_3, R), with the kinetic terms for these fields given by L_kin ∼ e^{2D} ⟨γ_I, ⋆ γ_J⟩ ∂_μ ξ^I ∂^μ ξ^J , where D denotes the four-dimensional dilaton, and ⟨·,·⟩ is defined in (2.5). The field range of these axions is ξ^I ≅ ξ^I + 2π, such that the metric G_{IJ} defines the axion decay constants. A suitable basis for this metric adapted to the infinite distance limit is the special three-form basis discussed in section 2.4, since it decomposes the Hodge norm into blocks that have the same growth rate (2.32). Let us therefore split up these fields into axions ξ^{ℓ,α} corresponding to the basis vectors v_{ℓ,α}, α = 1, ..., dim V_ℓ, that span the vector spaces V_ℓ. Collecting all v_{ℓ,α} we have, by using (2.20), a basis of H^3(Y_3, R), and the ξ^{ℓ,α} parameterize all axion directions. The kinetic terms for these fields are then given by L_kin ∼ e^{2D} ⟨v_{ℓ,α}, ⋆ v_{r,β}⟩ ∂_μ ξ^{ℓ,α} ∂^μ ξ^{r,β} , (4.6) which, by using (2.31) and (2.32), we can rewrite in the infinite distance limit into L_kin ∼ e^{2D} Σ_ℓ (y^1/y^2)^{ℓ_1-3} ··· (y^n)^{ℓ_n-3} ⟨v_{ℓ,α}, ⋆_∞ v_{ℓ,β}⟩ ∂_μ ξ^{ℓ,α} ∂^μ ξ^{ℓ,β} . (4.7) Some comments about (4.7) are in order here. Firstly, note that this expression is an asymptotic expression for the kinetic terms, as indicated by the symbol ∼, which can be used to bound the actual field space metric. As is equally true for (2.32), it does not capture the numerical factors, and in fact it does not follow from our considerations how large the numerical constants in (4.7) need to be chosen such that it provides a good approximation to (4.6). In fact, the more precise statement is that the actual Hodge norm is bounded by the asymptotic norm (2.32) when multiplied by some finite constant, depending only on how close one is to the limiting point. Accordingly, our results will always only give bounds with undetermined numerical coefficients. Secondly, we note that we have used the orthogonality (2.31) among the V_ℓ. This simplifies the result significantly, but needs to be read, with our first remark in mind, as the statement that the off-diagonal terms among different V_ℓ are sub-dominant when considering sufficiently large y^1, ..., y^n. Nevertheless, it divides the axions into various sets that 'decouple' at least when considering the dominant growth. It is clear from (4.7) that we can make no such decoupling statements when considering axions coming from the same V_ℓ. All axions from a fixed V_ℓ grow with the same rate in y^1, ..., y^n.
Let us now look at the growth of the various terms in (4.7) in more detail. Clearly, depending on the values of ℓ and the considered path in the y^1, ..., y^n, the kinetic terms either go to zero, stay constant, or grow to become infinitely large in the limit. Clearly, in order to test the axion WGC (4.1) we are interested in increasing kinetic terms and axion decay constants. In this work, however, we will restrict our considerations to axions whose decay constants grow in a certain path-independent way. More precisely, we will demand that f_ℓ^2 grows for any path that resides in a growth sector such as (2.9). This immediately implies constraints on the integers ℓ_i. Namely, we find by inspecting (4.7) together with (2.9) that the basis vectors v_ℓ of the considered axions must be elements of V_heavy = ⊕_{ℓ ∈ E_heavy} V_ℓ , (4.8) where E_heavy collects those ℓ for which the growth factor in (4.7) increases along every path in the growth sector. Note, in particular, that V_heavy is defined in an opposite fashion compared to the vector space V_light in (3.3), which contained all vector spaces whose Hodge norm decreased along every path of the form (2.8) in the growth sector. In fact, we can use these two definitions to decompose the vector space H^3(Y_3, R) as H^3(Y_3, R) = V_heavy ⊕ V_rest ⊕ V_light . (4.9) The last part V_rest comprises the remaining directions in the vector space decomposition. They parametrize axions whose decay constants grow, decrease, or stay constant depending on the considered path approaching the limit y^1, ..., y^n → ∞. Recall that the products between vectors out of two spaces V_ℓ, V_r can only be non-zero if ℓ + r = 6. This implies ⟨V_heavy, V_heavy⟩ = ⟨V_light, V_light⟩ = ⟨V_heavy, V_rest⟩ = ⟨V_light, V_rest⟩ = 0 . (4.10) Furthermore, we find that the vector spaces V_heavy and V_light are dual to each other under the product ⟨·,·⟩, and indices in the index sets E_light and E_heavy can be canonically identified. In other words, we find that for every vector v_heavy ∈ V_heavy there exists a vector v_light ∈ V_light such that ⟨v_heavy, v_light⟩ ≠ 0 . (4.11) This canonical duality between V_heavy and V_light will be crucial in arguing that we can couple every axion direction v ∈ V_heavy to a D2-brane instanton Q ∈ V_light. Now that the asymptotic behavior of the kinetic terms has been discussed in detail, we are ready to analyze the asymptotic axion decay constants. From (4.7) we can deduce that only the couplings between axions in the same subspace V_ℓ are relevant in the infinite distance limit, and that axions residing in different subspaces decouple. Therefore the asymptotic axion decay constants are given by (f^2)_{ℓ, α_ℓ β_ℓ} ∼ (y^1/y^2)^{ℓ_1-3} ··· (y^n)^{ℓ_n-3} f̂_{α_ℓ β_ℓ} , (4.12) where we stress that we find a matrix with blocks along the diagonal labelled by ℓ, and each index I is associated with one pair (ℓ, α_ℓ). The axion decay constants for the individual blocks are proportional to f̂_{α_ℓ β_ℓ} defined as f̂_{α_ℓ β_ℓ} ∼ ⟨v_{ℓ,α_ℓ}, ⋆_∞ v_{ℓ,β_ℓ}⟩ . (4.13) As discussed already in the context of the kinetic terms and in section 2.5, this matrix remains finite in the infinite distance limit and, in fact, does not depend on the coordinates y^i. Thus the asymptotic behavior in y^i of the axion decay constants is captured by the power law in (4.12).
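In the simplest one-parameter case this power law is easy to state explicitly (again only up to undetermined finite constants): the decay constants of axions along V_ℓ scale as f_ℓ^2 ∼ y^{ℓ-3}, so they grow precisely for the directions in V_4 ⊕ V_5 ⊕ V_6 = V_heavy, stay parametrically constant on V_3, and shrink on V_light, which is the pattern used in the Type IV example below.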
D2-brane instantons and the axion WGC
Having discussed the kinetic terms (4.6), (4.7) and the resulting axion decay constants (4.12), we now study D2-brane instantons. Recall that the world-volume action of a Euclidean D2-brane wrapping a three-cycle specified by Q is of the form S_D2 ∼ |Z(Q)| - i ⟨Q, C_3⟩ , (4.14) with Z(Q) being the central charge defined in (3.1). The coupling functions, such as the moduli space metric, of the effective theory thus receive corrections of the form e^{-S_D2}. Expanding C_3 in ξ^{ℓ,α} v_{ℓ,α} then tells us that the charges of this instanton are given by q_{ℓ,α}(Q) = ⟨Q, v_{ℓ,α}⟩ . (4.15) Let us next focus on identifying candidate D2-brane instantons which have an instanton action S that decreases along the infinite distance limit. The instanton action S of a Euclidean D2-brane is given by the real part of S_D2 in (4.14), from which we obtain S ∼ |Z(Q)| . (4.16) The idea is to consider the charges Q introduced in (3.13) that described asymptotically massless D3-brane states in the SDC considerations of section 3. Wrapping Euclidean D2-branes on these three-cycles then provides us with candidate instantons that can couple to the R-R axions.
More specifically, the mass of these D3-brane states was previously given by |Z(Q)|, and it now gives us the instanton action S. Thus the fact that the D3-brane states became massless in the infinite distance limit ensures that S is decreasing as well, and we have S ∼ |Z(Q)| ∼ ‖q_0‖ . (4.17) Note that in order to obtain this asymptotic expression for the central charge we have used (3.22). Moreover, since q_0 is the slowest-decreasing charge in Q, the leading growth of (3.22) agrees with the growth of ‖q_0‖ up to a finite prefactor.
We are now ready to determine the vectors z_a defined in (4.3). Since our instantons are labelled by the charge vector Q we thus need to determine z(Q). Inserting (4.12), (4.15) and (4.16) into the vectors (4.3) we find z^I(Q) ∼ [ (y^1/y^2)^{ℓ_1-3} ··· (y^n)^{ℓ_n-3} ]^{-1/2} (f̂^{-1})^{α_ℓ β_ℓ} q_{ℓ,β_ℓ}(Q) / S(Q) , (4.18) where as before the index I is split into (ℓ, α_ℓ).
In the following we will simplify our discussion by no longer indicating the block structure and hence suppress the indices α_ℓ, β_ℓ and the finite asymptotic axion decay constants f̂_{α_ℓ β_ℓ}. This can be done since the asymptotic behavior is entirely captured by the y^i-dependent growth factors, and we will keep this relevant information for evaluating the axion WGC constraint. More precisely, we will consider the vectors z_ℓ(Q) ∼ [ (y^1/y^2)^{ℓ_1-3} ··· (y^n)^{ℓ_n-3} ]^{-1/2} q_ℓ(Q) / S(Q) (4.19) when discussing the axion WGC introduced at the beginning of this section. Since we are only concerned with parametric control, we thus ask whether z^I(Q) is bounded from below in the directions with parametrically growing axion decay constants. Recall that we will only be discussing directions where this happens path-independently in a growth sector. This implies that we consider z(Q) = z^I(Q) γ_I ∈ V_heavy . (4.20) To make the growth of (4.19) fully explicit we next use (4.17) and q_0 ∈ V_r in (4.19) to determine z_ℓ(Q) ∼ ẑ_ℓ(q_0) q_ℓ(Q) , with ẑ_ℓ(q_0) = [ (y^1/y^2)^{(ℓ_1-3)/2} ··· (y^n)^{(ℓ_n-3)/2} ‖q_0‖ ]^{-1} , (4.21) where we have evaluated the growth of ‖q_0‖ using (2.32). Therefore, in order to show that the axion WGC is not parametrically violated, it will often suffice to argue that the y^i-dependent prefactor ẑ_ℓ(q_0) in (4.21) is bounded from below. However, in order to capture all cases it turns out to be important to also consider the growth of the tower encoded by q_ℓ(Q). In fact, in the next subsections we will argue that one can use (4.11) together with the growth of the tower discussed in section 3.2 to always identify appropriate combinations of Type G and Type F charges specifying Q given in (3.13) such that the convex hull condition for the multi-axion WGC is never parametrically violated.
Axion WGC in one-parameter infinite distance limits
To illustrate how to construct the D2-brane instantons relevant for the axion WGC, let us first look at an example before moving to more general settings. Concretely, we first consider a one-parameter Type IV_d infinite distance limit. As detailed in appendix A.1 we can decompose V_light, V_heavy and V_rest into the vector spaces V_ℓ as listed in table 4.1. We can reformulate the charge vectors found in [15] as Q(q_0|m^1, m^s) = q_0 + m^1 q_1 + Σ'_s m^s v_2^s , (4.22) with the representatives for the Type G charges given by q_0 = (N_1^-)^2 ã_0 , q_1 = (N_1^-)^3 ã_0 . (4.23) Note that the sum over Type F charges for Q_F indeed excludes the two basis vectors q_0, q_1, and only sums over the remaining d - 1 basis vectors v_2^s of V_2. In the following we will discuss how these instantons Q allow us to examine the WGC for axion directions in each of the subspaces of V_heavy via the vectors z(Q).
First consider an axion direction v_4 ∈ V_4 for z(Q). Its growth in this component z_4(Q) can be deduced from (4.21), and using that q_0 ∈ V_2 we find that all factors of y^1 cancel each other, such that ẑ_4(q_0) remains finite along the infinite distance limit. Then to provide evidence for the axion WGC for this direction, the only remaining thing to show is that we can ensure that one of our D2-brane instantons has non-zero charge with respect to this axion, that is, q_4(Q) = ⟨Q, v_4⟩ ≠ 0. This axion direction can couple to either the Type G charge vector q_0 or some Type F charge vector v_2^i of V_2 under the product ⟨·,·⟩, which can be seen from its orthogonality properties (2.25). If it couples to q_0 then every instanton in our tower (4.22) suffices to argue that the axion WGC is not violated parametrically in this direction, whereas if it couples to some v_2^i we must pick a non-zero Type F charge m^i ≠ 0 to ensure a non-zero charge with respect to this axion. Now consider the direction v_6 ∈ V_6. The orthogonality properties (2.25) of ⟨·,·⟩ tell us that this axion direction couples to q_1, since V_6 and V_0 are dual to each other and both vector spaces are one-dimensional. This axion therefore couples to one of the D2-brane instantons (4.22) provided that m^1 ≠ 0. However, at first sight this axion direction seems to violate the WGC for axions, because we find by use of (4.21) that the growth of z_6(Q) is controlled by ẑ_6(q_0) ∼ (y^1)^{-1}, since q_0 ∈ V_2. This suggests that the convex hull cannot envelop a ball of finite size in this direction, and that instead the convex hull seems to shrink along this direction as we move further along the infinite distance limit. We can resolve this issue by using the fact that we have a tower of instantons. Namely, if we consider a D2-brane instanton with m^1 = m^1_crit, with m^1_crit ∼ y^1 the growth of our tower discussed in section 3.2, we find that its charge grows as q_6(Q) = ⟨Q, v_6⟩ = m^1 ⟨q_1, v_6⟩ ∼ y^1 . (4.24) Then, combined with the growth specified by (4.21), we find that z_6(Q) ∼ ẑ_6(q_0) q_6(Q) ∼ (y^1)^{-1} (y^1) ∼ 1 , (4.25) which thus provides us with a vector z(Q) whose component in the direction v_6 is bounded in size from below, which suffices to argue that the axion WGC is not violated parametrically in this direction either.
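For completeness, the power counting behind these two statements, using only the one-parameter scalings quoted earlier: f_4 ∼ y^{1/2}, f_6 ∼ y^{3/2} and S ∼ ‖q_0‖ ∼ y^{-1/2}, so ẑ_4(q_0) ∼ (f_4 S)^{-1} ∼ 1 while ẑ_6(q_0) ∼ (f_6 S)^{-1} ∼ y^{-1}, which is why the V_6 direction needs the parametric growth of the charge q_6(Q) ∼ m^1_crit ∼ y^1 to keep z_6(Q) of order one.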
In summary, we have thus found, for every direction in V_heavy, a vector z(Q), by picking an appropriate instanton Q out of (4.22), such that its component in that direction is bounded in size from below along this infinite distance limit. Therefore we found a set of vectors z(Q) that span a convex hull which will always contain a certain ball of finite size. From a physics perspective, this means that we have found that the axion WGC cannot be violated parametrically by considering axion decay constants that grow path-independently in this example, since we have shown that there always exists an appropriate instanton with decreasing action that couples to such axions. The fact that our tower has a critical size m_crit which increases parametrically was crucial for arguing that the WGC for axions cannot be violated parametrically. We will motivate this feature more generally in the next subsection, where it falls under case (2), and a demonstrative figure is also provided in figure 2.
From the discussion in section 2.3 we know that the remaining one-parameter infinite distance limits are Type II and Type III limits, which turn out to be slightly less interesting than the Type IV infinite distance limit. Namely, the Type G charge vectors q 0 , q 1 that generate the states are located in the same eigenspace, V 1 for a Type III limit or V 2 for a Type II limit, as can be inferred from table 3.1. Furthermore, the Hodge norm on these eigenspaces V 1 and V 2 possesses the largest decrease out of all eigenspaces for the corresponding limits by use of the growth properties (2.32), since these eigenspaces have the lowest index. And since the decrease of the instanton action is determined by q 0 via (4.17), we know that the growth of any axion decay constant will be matched or even exceeded by the decrease of the instanton action, using the duality between V heavy , where the axion directions reside, and V light , where the instanton charges reside. This tells us that none of the axion directions can violate the axion WGC parametrically in these examples either. Note in particular that we do not need the parametric growth of m crit to fulfill the WGC for axion directions that couple to q 1 , because q 1 lies in the same eigenspace as q 0 , whereas this growth played a crucial role for the Type IV infinite distance limit.
Strategy for general infinite distance limits
Here we argue that the axion WGC cannot be violated parametrically by the R-R axions under consideration for general infinite distance limits. In doing so, we will only need to use the reformulated expression for the charge vector (3.13) of the D2-brane instantons, together with some requirements on the growth rate of the size of the tower m crit that we argued for in section 3.2. We can strategically analyze the axion directions v ∈ V heavy by breaking them down into the following four cases, where we have that:

(1) q (Q G (m I )) ≠ 0 for some m I , and ẑ (q 0 ) is bounded from below;

(2) q (Q G (m I )) ≠ 0 for some m I , and ẑ (q 0 ) is unbounded from below;

(3) q (Q F (m s )) ≠ 0 for some m s , and ẑ (q 0 ) is bounded from below;

(4) q (Q F (m s )) ≠ 0 for some m s , and ẑ (q 0 ) is unbounded from below.
The purpose of this separation into cases is to investigate how every axion direction v ∈ V heavy can be coupled to one of the D2-brane instantons Q specified by (3.13) such that, provided we pick the right charges, the growth of this component z (Q(q 0 |m I , m s )) can be bounded from below. Namely, by showing that we can pick a vector z(Q) for every direction in V heavy such that its component in that direction is bounded in size from below, we know that the associated convex hull must contain a ball of finite size. The first condition then indicates whether this axion couples to our tower of D2-brane instantons via one of their Type G charges or one of their Type F charges. This coupling can be ensured via the canonical duality (4.11) between V heavy and V light , since the Type F and Type G charge vectors of our D2-brane instantons together span the whole of V light by construction (see section 3.1), such that we only need to pick the right combination of instanton charges. Therefore we are left with analyzing the growth rate of the vectors z(Q(q 0 |m I , m s )) in that direction, which is given by z (Q(q 0 |m I , m s )) in (4.21). Then whether the growth of ẑ (q 0 ) is bounded from below tells us directly if this component remains finite for all paths within the growth sector (2.9). However, the growth of ẑ (q 0 ) is not necessarily bounded from below, such that there can be directions in which the convex hull seems to shrink along the infinite distance limit. The fact that we still have to account for the possible parametric growth of the charge q (Q) in (4.21) will resolve this issue for cases (2) and (4). Below we examine these cases separately in more detail, such that we can argue that each of these cases does not violate the axion WGC parametrically.
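Before going through the cases in detail, the decision logic can be summarized schematically as follows (an illustrative Python sketch; the two boolean inputs stand in for the coupling and boundedness data that in the actual analysis follow from (2.25) and (4.21)):

```python
def pick_instanton(couples_to_type_G: bool, z_hat_bounded: bool) -> str:
    """Schematic case split for an axion direction v in V_heavy.

    couples_to_type_G: whether a Type G charge of the tower couples to v
                       (otherwise a Type F charge must be used).
    z_hat_bounded:     whether hat{z}_v(q_0) is bounded from below along the limit.
    """
    if couples_to_type_G and z_hat_bounded:
        return "case (1): any instanton in the tower works"
    if couples_to_type_G and not z_hat_bounded:
        return "case (2): take m_I = m_I_crit so the charge growth compensates"
    if not couples_to_type_G and z_hat_bounded:
        return "case (3): pick any non-zero Type F charge m_s"
    return "case (4): pick m_s of order m_crit to compensate the shrinking"

for G in (True, False):
    for bounded in (True, False):
        print(G, bounded, "->", pick_instanton(G, bounded))
```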
Case (1):
We have that q (Q G (m I )) = ⟨Q G , v⟩ ≠ 0 for some m I , so the axion direction couples to a Type G charge of our charge vector. Additionally, we know that the growth of ẑ (q 0 ) in (4.21) is bounded from below, such that the convex hull can envelop a ball of finite size in this direction, which indicates that these axions can therefore never violate the axion WGC parametrically. For example, the axion direction N − 1 ã 0 ∈ V 4 in the one-parameter Type IV limit belonged to this case.
Case (2): This case seems to violate the axion WGC at first sight, since ẑ (q 0 ) cannot be bounded from below, that is, ẑ (q 0 ) → 0 for certain paths within the growth sector (2.9), which suggests that the convex hull can shrink in this direction as we move along the infinite distance limit, similar to the axion direction ã 0 ∈ V 6 we encountered in the one-parameter Type IV limit in section 4.3. However, we did not account for the charge of this instanton yet, which is given by (4.26). We know that this axion must couple to a Type G charge, so we must have that ⟨q J , v⟩ ≠ 0 for some J, and thus q J ∈ V 6− by use of (2.25). Then increasing this m J results in increasing the charge q (Q), such that if we pick m J = m J crit ∼ q 0 / q J following (3.25), we find that the component of z(Q) in this direction remains of order one, where we used that the growths of the norms of q 0 and q J cancel the growth rate in front precisely, using that q 0 ∈ V r and q J ∈ V 6− . Thus we found that the tower of D2-brane instantons grows at exactly the right rate to avoid parametric violations of the WGC for these axion directions, since it allowed us to pick appropriate instanton charges such that the component of z(Q) in this direction is bounded in size from below, which indicates that the convex hull can envelop a ball of finite size in this direction. Note that this crucially relies on the growth of the instanton charge q (Q), which means that one cannot use only the smallest charge with respect to these axions. A depiction of this interplay between the growth of the tower and the convex hull condition of the axion WGC has been provided in figure 2.
Case (3): By picking the right Type F charges m s we can ensure that such an axion direction couples to a D2-brane instanton Q out of our tower (3.13), and the fact that ẑ (q 0 ) is bounded from below then indicates that the convex hull can envelop a ball of finite size in this direction, which tells us that these axions cannot violate the WGC. As an example, in the one-parameter Type IV limit in section 4.3 an axion direction v 4 ∈ V 4 that does not couple to the Type G charge vector q 0 falls under this case.
Case (4):
We know that v must couple to one of the Type F charge vectors, with the relevant Type F charge vectors given by v i 6− ∈ V 6− ⊂ V light . However, we cannot bound ẑ (q 0 ) from below, such that we can have ẑ (q 0 ) → 0 for certain paths to the infinite distance limit. This indicates that the convex hull shrinks along these directions, and to avoid this we must pick m i 6− ∼ q 0 v , such that we do not have parametric violations of the axion WGC, similar to case (2). In section 3.2 we argued that the upper bound m crit for Type F charges increases at a sufficient rate to allow for such choices of Type F charge. We should note that this case did not occur in the one-parameter examples, but can occur in multi-parameter limits.
Axion WGC in two-parameter infinite distance limits
In this section we provide some examples to demonstrate the strategy outlined above. We go through all two-parameter infinite distance limits t 1 , t 2 → i∞ and construct the relevant charge vectors. It should be stressed that in contrast to [23] we will not require the charge vector Q to be in orbit-form. This allows us to address all possible enhancements and show a general result.
In the case of considering a limit t 1 , t 2 → i∞ one has two log-monodromy matrices N 1 , N 2 and hence one has two singularity types (2.16) associated to N 1 and N (2) = N 1 + N 2 according to our discussion in section 2.3. Denoting these types by Type A and Type B, the two-parameter configurations are split into all possible enhancements Type A → Type B. Going through all relevant cases we will determine the split (4.9) with V light and V heavy defined in (3.3) and (4.8).
The various components are given as direct sums of the vector spaces V ℓ1 ℓ2 . By use of appendix A.2 we first give the dimensions of the V ℓ1 ℓ2 and indicate the positions of the basis of Type G representatives q 0 , q I in V light , such that we can write the charge vectors in the formulation introduced in section 3.1. Then we consider each axion direction v ℓ1 ℓ2 that lies in a subspace V ℓ1 ℓ2 of V heavy and identify the candidate D2-brane instanton that ensures that the axion WGC is not violated parametrically. In doing so, we employ the orthogonality properties (2.25) of the product ⟨·, ·⟩ and the growth properties (2.32) for the sl(2, C)-eigenspaces V ℓ1 ℓ2 . Furthermore, we point out for each of these axion directions v ℓ1 ℓ2 to which of the cases in the strategy outlined in the previous section it belongs. In particular, whether v ℓ1 ℓ2 q 0 can grow unboundedly directly indicates, by use of (4.21), if the growth of ẑ ℓ1 ℓ2 (Q) is bounded from below, or if it is unbounded and we need to consider the growth of instanton charges as well.
Enhancement I a → IV d
Let us first consider the enhancement from Type I a to IV d , which can occur for d = r + a with r ≥ 1. Then H 3 (Y 3 , R) decomposes as in table 4.2. The charge vectors found in [23] can be reformulated accordingly, with the basis of representatives for the Type G charges fixed as before. Now let us go systematically through the axion directions for each of the subspaces of V heavy : • The axion direction v 44 ∈ V 44 must couple to some Type F charge vector v j 22 . For certain paths we can have that v 44 q 0 grows unboundedly, which means that we must require m j 22 ∼ q 0 v 44 such that we do not violate the WGC for axions parametrically. These axion directions belong to case (4).
• The axion direction v 36 ∈ V 36 couples to the Type G charge vector q 1 , thus we must pick m 1 ≠ 0 such that it couples to the D2-brane instanton. Furthermore we have that v 36 q 0 grows unboundedly, which indicates that we must pick m 1 = m 1 crit ∼ q 0 / q 1 to avoid parametric violations of the axion WGC. This axion direction therefore belongs to case (2).
• The axion direction v 34 ∈ V 34 can couple to either the Type G charge vector q 0 or some Type F charge vector v i 32 . For both cases we have that the growth of q 0 v 34 is bounded, so we cannot have a parametric violation of the axion WGC. If it couples to q 0 this axion belongs to case (1), whereas if it couples to some v i 32 we must pick m i 32 ≠ 0 and it belongs to case (3).

Enhancement II b → IV d

For the enhancement from Type II b to IV d , the charge vectors found in [23] can be reformulated accordingly, with the basis of representatives for the Type G charges fixed as before. Let us go through each of the subspaces of V heavy systematically: • For v 46 ∈ V 46 we know that it couples to the Type G charge vector q 1 from the orthogonality conditions, since V 20 and V 46 are dual one-dimensional vector spaces. We also know that v 46 q 0 can grow unboundedly, and that we therefore must pick m 1 = m 1 crit ∼ v 46 q 0 . Thus this axion direction belongs to case (2).
• For v 44 ∈ V 44 we can have that it couples either to the Type G charge vector q 0 or to some Type F charge vector v j 22 . Either way the growth rates of q 0 and v 44 cancel each other. In the first case it couples to a Type G charge and thus belongs to case (1), and in the other case it couples to a Type F charge and thus belongs to case (3).
• For v 34 ∈ V 34 we know that it couples to some Type F charge vector v i 32 , and that the growth rates of v 34 and q 0 cancel each other. Therefore these axion directions belong to case (3).
Enhancement III c → IV d
The enhancement from type III c to IV d can occur if we have d = r + c + 2 and r ≥ 0. The vector spaces can then be decomposed as listed in table 4.4.
The charge vectors can be given with the basis of representatives for the Type G charges fixed as before. Note that these charge vectors Q G differ from the ones found in [23]. There the tower was generated by acting with e N − 2 on q 1 , such that only q 1 , q 2 were used to span the charges of the states. A non-zero component for Q in the direction q 0 will be necessary to couple certain axion directions in V 34 to the tower of D2-brane instantons. Now let us go through each of the subspaces of V heavy : • For v 56 ∈ V 56 we know that the axion couples to the Type G charge q 2 . We can increase m 2 to m 2 crit ∼ q 0 v 56 , such that we cancel the growth of q 0 v 56 . This axion direction therefore belongs to case (2).
• For v 54 ∈ V 54 we know that it couples to the Type G charge q 1 from the orthogonality conditions. Again we have that q 0 v 54 can grow unboundedly, thus we must increase m 1 to m 1 crit ∼ q 0 v 54 to cancel this growth. Therefore this axion direction belongs to case (2) as well.
• For v 44 ∈ V 44 we have that it couples to some Type F charge v j 22 . The growth rate of v 44 can exceed the decrease of q 0 . We must therefore increase m j 22 ∼ v 44 q 0 . This tells us that these axion directions belong to case (4).
• For v 34 ∈ V 34 we can have that it either couples to the Type G charge q 0 or to some Type F charge v i 32 . For both cases we have that v 34 q 0 is bounded. If it couples to q 0 it belongs to case (1), whereas if it couples to some v i 32 we must pick m i 32 ≠ 0 and it belongs to case (3).
Enhancement II b → III c
The enhancement from Type II b to Type III c can occur if we have c = b + r − 2 and r ≥ 0. The vector spaces are then decomposed as in table 4.5. Here we consider charge vectors different from [23], given by (4.35), and the basis of representatives for the Type G charges can be chosen accordingly. Then let us go through each of the subspaces of V heavy for the axion directions: • The axion direction v 45 ∈ V 45 can couple to the Type G charge vectors q 0 and q 1 . In both cases the growth of v 45 is matched by the decrease of q 0 . These axion directions therefore belong to case (1).
• The axion direction v 44 ∈ V 44 couples to some Type F charge v j 22 . The growth rate of v 44 can never exceed the decrease of q 0 , and thus we can just pick any m j 22 ≠ 0. Therefore these axion directions belong to case (3).
• The axion direction v 34 ∈ V 34 couples to a Type F charge vector v i 32 . We have that the growth of v 34 never exceeds the decrease of q 0 . Thus we pick m i 32 ≠ 0, and therefore such axion directions belong to case (3).
Conclusions
In this paper we studied the axion Weak Gravity Conjecture for asymptotic regimes in field space that are at infinite geodesic distance. Specifically we focused on the axions arising from the R-R three-form in Type IIA string theory compactified on a Calabi-Yau threefold. The kinetic terms of these axions depend non-trivially on the complex structure moduli of the threefold, but we showed that this dependence can be made explicit in asymptotic regimes that are at infinite geodesic distance. The infinite distance points in general Calabi-Yau threefold moduli spaces that are obtained by sending any number of coordinates to a limit can be classified [23,49]. We have shown that the data characterizing a limit can also be used to group the axions into subsets, with each subset having a kinetic term with a common growth behaviour. We then focused on the axions that have growing kinetic terms for any path in a growth sector of the form (2.9), and hence growing axion decay constants. In order that these do not violate the axion Weak Gravity Conjecture, instantons have to become relevant with actions decreasing at the inverse rate when approaching the infinite distance point. By using recent insights about the SDC [15,23], we have argued that one can always find such instantons, since an infinite number of candidate D2-brane states has vanishing action at the infinite distance point.
In order to address the axion WGC for multiple axions, we have constructed a set of vectors z(Q) that depend on the axion decay constants, the instanton action, and instanton charge.
Here it was crucial to introduce appropriate charge vectors Q(q 0 |m) in (3.13) such that these instantons actually correct the effective theory. The convex hull of such vectors should contain the unit ball, in order that the axion WGC is satisfied. We stress that this statement does not have to be true for the smallest charge coupling the instanton to the axion. Indeed we have shown that there are many infinite distance limits in moduli space, namely the limits that contain Type IV enhancements, for which the convex hull of the z(Q) cannot contain a unit ball if one considers the smallest instanton charge. The emerging picture is, however, compelling: the closer we approach such infinite distance points the higher the instanton charge of the instanton relevant in (4.3) has to be. We have depicted this result in figure 2. This implies that actually an ever increasing tower of instanton states becomes relevant when approaching the infinite distance point. Clearly, such a picture is reminiscent of the SDC where an increasing tower of particles needs to be included in the effective theory. It is interesting that our findings can also be viewed as providing evidence for the strong axion WGC, if the charge vectors Q(q 0 |m) are indeed describing the lightest stable states relevant for the SDC. The instanton actions for all charges Q(q 0 |m) have the same leading growth determined by q 0 and are thus equally relevant in the effective theory. It would be very interesting to explore this further and, in particular, clarify the role of individual Type G and Type F states that are not of the form Q(q 0 |m).

Figure 2: Depiction of the convex hull spanned by the vectors ±z(Q(0)) and ±z(Q(m crit )), which correspond to the lowest and highest instanton in our tower respectively. Note that we explicitly depicted the steps that we go through as we move up in this tower of instantons. As we move further along the infinite distance limit, we find that these steps become smaller and smaller, which is compensated by the fact that our tower of instantons is becoming larger, such that the total length of this side of the convex hull remains finite and thus that the convex hull will always contain a ball of finite size.
In our analysis of the D2-brane instantons it was crucial to collect information about D-brane states with asymptotically vanishing actions. We started our considerations by asserting that these states are BPS and that their action can be determined by evaluating the central charge. Furthermore, at least for a certain large class of possible limits, it was essential to argue how the number of stable states changes when approaching the infinite distance point. In order to do that we generalized the stability argument of [15] to multiple variables. More precisely, we derived a maximal growth of the tower of states that are stable when approaching the infinite distance point by ensuring the absence of decays of these states. Our stability arguments apply to situations in which one can identify a charge vector q 0 that has a distinguished slowest decrease of the associated central charge. Such an identification was also important in the construction of the charge orbits of [23] to which our results naturally apply. It is important to stress, however, that even if the instanton charges do not take the form of a charge orbit our findings suffice to provide general evidence that the axion WGC is not parametrically violated. It should be clear that our findings cannot be conclusive when it comes to checking the BPS properties of D-brane states. It would be interesting to explore these issues further and, in particular, study stability at limits in moduli space that do not allow for a local construction of a charge orbit. A potential avenue was suggested in [23], and exemplified in [27], in which charge orbits were transferred along the moduli space.
Our results were obtained in full generality for any infinite distance limit in complex structure moduli space and hence do not apply to only a specific example or a class of examples. This was achieved by using the powerful mathematical machinery of [66,67], which describes so-called limiting mixed Hodge structures. One of the central results of these papers is the introduction of n commuting sl(2, C)-algebras associated to an infinite distance locus obtained by sending n coordinates to a limit. These algebras act on the vector space H 3 (Y 3 , C) and induce a canonical splitting of H 3 (Y 3 , R), which we argued to be crucial in studying the axion kinetic terms. The sl(2, C)-data arises from log-monodromy matrices N i , the limiting period vector a 0 , and the growth sector (2.9) associated to the path along which one takes the limit. We believe that this approach will be fruitful in many further applications [84]. However, it should be stressed that it is particularly powerful when it comes to estimates, such as the ones encountered for the axion WGC. This can be traced back to the fact that the asymptotic behaviour of periods Π can be bounded by using the sl(2, C)-structure, but the corrections are only under parametric control. It would be very interesting to systematically classify the corrections arising in the link of Π to the sl(2, C)-structure.
This leaves us to close with highlighting further interesting open problems for future projects. A first direction is to address the generalization of our considerations to any path in complex structure moduli space and hypermultiplet moduli space. On the one hand, this would require going beyond the growth sector description presented here. On the other hand, it would also amount to considering paths in which one sends the four-dimensional dilaton e D to a limit. Satisfying the axion WGC then requires considering more general D-brane configurations, as very recently also discussed in [31][32][33]. We stress that it is an interesting and challenging task to unify the mathematical structure presented here with the general insights about the hypermultiplet moduli space [34,35]. A second open question is to address the issue of emergence in hypermultiplet moduli space. More precisely, it was suggested in [15] that infinite distances in moduli space could be emergent from integrating out the infinite tower of states relevant to the SDC. Furthermore, it was very recently argued in [31] that in certain situations the inclusion of D-instanton corrections into the moduli space metric can render formerly infinite distance points to lie at finite distance. It would be interesting to check if this is indeed true for all infinite distance limits investigated here. Finally, let us close with the rather obvious statement that we did not check the precise numerical constraint suggested by the axion WGC. While the introduced mathematical machinery gives bounds on the relevant quantities, the appearing numerical coefficients are not further constrained. While they can be derived in explicit examples, it would be very exciting to check if there are general constraints arising from geometry.
Acknowledgements
It is a pleasure to thank Chongchuo Li, Eran Palti, Irene Valenzuela, and Stefan Vandoren for valuable discussions.
A Derivation of eigenspaces for infinite distance limits
In this appendix we decompose the eigenspaces V ℓ into the primitive subspaces P p,q (N (k) ) for one- and two-parameter infinite distance limits, including their dimensions. We make the location of the vectors that follow from ã 0 ∈ P 3,d n (N − (n) ) explicit, because of their importance in constructing Type G charge vectors in sections 3 and 4. These results can be argued from the Hodge-Deligne diamonds in each step of the enhancement chain, which were given in [23]. We explain this procedure for the one- and two-parameter infinite distance limits separately. Note that the V ℓ given in this appendix are the complexifications of the vector spaces used in the main text.
A.1 Eigenspaces V ℓ for one-parameter infinite distance limits
Here we give the decomposition and the dimension of the eigenspaces V ℓ for all types of one-parameter infinite distance limits considered in section 3. The content of these spaces can be read off from the rows of the Hodge-Deligne diamond, which were given in [15,23], with the index of the row indicating the subscript ℓ. This follows from the fact that elements of the same row have the same eigenvalue under the generator Y 1 of the sl(2, C)-triple.
A.1.2 Type II b
For a Type II b infinite distance limit the eigenspaces V ℓ and their dimensions are as follows. Note that ã 0 ∈ P 3,1 (N − 1 ) ⊆ P 4 (N − 1 ).
A.1.3 Type III c
For a Type III c infinite distance limit the eigenspaces V ℓ and their dimensions are as follows. Note that ã 0 ∈ P 3,2 (N − 1 ) ⊆ P 5 (N − 1 ).
A.1.4 Type IV d
For a Type IV d infinite distance limit the eigenspaces V ℓ and their dimensions are as follows. Note that ã 0 ∈ P 3,3 (N − 1 ) ⊆ P 6 (N − 1 ).
A.2 Eigenspaces V ℓ1 ℓ2 for two-parameter infinite distance limits
Here we give the spaces V ℓ1 ℓ2 for the two-parameter infinite distance limits considered in section 4.5. Let us first shortly explain the procedure used to derive these spaces V ℓ1 ℓ2 , which uses the Hodge-Deligne diamonds considered in [23]. We start with the Hodge-Deligne diamond induced by N − 1 . This diamond can be split up into components (N − 1 ) a P b (N − 1 ), with P b (N − 1 ) the primitive vector space of weight b associated with N − 1 . The row to which this component belongs determines the number ℓ 1 = b − 2a, similar to the one-parameter infinite distance limits. Then N − 2 induces a mixed Hodge structure on each of these components (N − 1 ) a P b (N − 1 ) individually. In practice, this means that (N − 1 ) a P b (N − 1 ) is split up into further pieces, and each piece belongs to a specific point in the Hodge-Deligne diamond induced by N − (2) . Then we can determine ℓ 2 from the row at which this piece ends up in the Hodge-Deligne diamond induced by N − (2) .
A.2.2 Enhancement II b → IV d
Here we consider the enhancement of a Type II b infinite distance limit to a Type IV d infinite distance limit. The decompositions and dimensions of the eigenspaces V ℓ1 ℓ2 are as follows. Note that ã 0 ∈ P 3,3 (N − (2) ).
A.2.3 Enhancement III c → IV d
Here we consider the enhancement of a Type III c infinite distance limit to a Type IV d infinite distance limit. The decompositions and dimensions of the eigenspaces V ℓ1 ℓ2 are as follows. Note that ã 0 ∈ P 3,3 (N − (2) ).
A.2.4 Enhancement II b → III c
Here we consider the enhancement of a Type II b infinite distance limit to a Type III c infinite distance limit. The decompositions and dimensions of the eigenspaces V ℓ1 ℓ2 are as follows. Note that ã 0 ∈ P 3,2 (N − (2) ). | 23,084.6 | 2019-05-02T00:00:00.000 | [
"Mathematics"
] |
Power Optimization Techniques for NoC
— The on-chip network has become a significant solution to the communication limitations of SoC (System-on-Chip) designs. The demand for increased bandwidth to support high core utilization, together with the need for low power consumption and higher performance, continues to grow. The router is the major circuit block in a NoC and strongly influences power dissipation, latency, and performance. Dynamic power consumption is one of the major components of total power consumption. This paper presents the detailed structure and verification of the router module and various power optimization techniques for NoC obtained by restructuring the architecture. The router design is coded in Verilog, synthesized, and simulated in the Xilinx ISE Design Suite 19.1 tool.
INTRODUCTION
As technology reaches new heights, memories and processors are becoming faster, smaller, cheaper, and more energy efficient. This helps computer architects incorporate a greater number of features on a single chip [1]. As Moore's Law approaches its limit, the integration of many simple cores is being used to continue improving performance while reducing fabrication costs. Consequently, not only computing power and memory access but also inter-core communication has become a bottleneck for performance.
As the semiconductor industry advances towards complex system-on-chip (SoC) designs containing hundreds of IPs, the traditional bus architecture shown in Figure 1 has been used for communication between the multiple cores. Bus architecture introduces considerable latency [3] and area overhead in the design. As technology scales to lower nodes, the need for high performance, higher efficiency, and low power design becomes the bottleneck. To satisfy these criteria of an SoC, bus-based communication has lost its significance and a better solution is needed.
Interconnection networks have emerged to replace traditional buses as the prevailing solution for fast, low-cost, and scalable communication. They are key to successful future digital systems, both chip multiprocessors consisting of identical cores and heterogeneous systems-on-chip. Over the last ten years, there has been an in-depth effort towards optimizing networks-on-chip (NoCs), from low-level physical aspects up to system-level and application-related problems. NoCs have now reached a mature level of development, with their integration as a fundamental component.
The network-on-chip architecture differs based on the technology and the target application. There are many topologies for NoC [2], such as Chip-Level Integration of Communicating Heterogeneous Elements (CLICHE), Butterfly Fat Tree (BFT), Scalable Programmable Integrated Network (SPIN), and Mesh (Figure 2). Every topology has its own advantages and disadvantages. The mesh topology has the following advantages over other topologies.
a. Each node is connected to 4 neighbours, except for the ones on the boundary.
b. Easy to lay out on-chip: regular and equal-length links.
c. Path diversity: many ways to get from one node to another.
A. Flow Control
Flow control is the method through which an upstream router learns about buffer availability in the downstream router. The router sending the data packets is called the upstream router, and the router receiving the data packets is called the downstream router. This is done by exchanging credits between the two routers. If a buffer is available, a packet can be forwarded to the next router after an acknowledgment signal from the downstream router to the upstream router.
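A behavioural sketch of this credit exchange is given below (illustrative Python rather than the Verilog used for the actual design; the class names and the buffer depth are invented for the example):

```python
from collections import deque

class Downstream:
    """Downstream router input buffer with a fixed number of flit slots."""
    def __init__(self, depth=4):
        self.buffer = deque()
        self.depth = depth
    def credits(self):
        return self.depth - len(self.buffer)   # free slots advertised upstream
    def accept(self, flit):
        assert len(self.buffer) < self.depth
        self.buffer.append(flit)
    def drain(self):                           # forward one flit, freeing a credit
        if self.buffer:
            self.buffer.popleft()

class Upstream:
    """Upstream router: sends a flit only when the downstream credit count is non-zero."""
    def send(self, flit, downstream):
        if downstream.credits() > 0:
            downstream.accept(flit)
            return True
        return False                           # stall until a credit is returned

down, up = Downstream(depth=2), Upstream()
for f in ["head", "body", "tail"]:
    print(f, "sent" if up.send(f, down) else "stalled (no credit)")
down.drain()                                   # downstream forwards a flit...
print("tail", "sent" if up.send("tail", down) else "stalled")  # ...so the tail can go
```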
B. Wormhole Flow Control
Data packets are divided into smaller units called flits. A data packet consists of three types of flits: head, body, and tail. Flits are sent across the fabric in a wormhole fashion: the body flit follows the head flit, the tail follows the body flit, and this happens in a pipelined manner. Figure 3 represents the wormhole flow control mechanism. If the head flit is blocked, then the rest of the packet is stopped, since the information about the destination (the routing information) is carried only by the head flit. This type of flow control has lower latency and, since buffers are not reserved for the entire packet, it has efficient buffer utilization. In this type of flow control, a packet can occupy resources across multiple routers. Wormhole routing has one problem called head-of-line blocking: if a head flit cannot move due to contention, the flits of another worm (packet) behind it cannot proceed even though links may be idle.
C. Virtual Channel Flow Control
As discussed above, to avoid head-of-line blocking, the virtual channel flow control mechanism has been proposed. One physical channel is multiplexed among multiple virtual channels: the single FIFO buffer is replaced with multiple buffers, so a single physical channel terminates at multiple buffers. These buffers are known as virtual channels, and this concept is known as virtual channel flow control. A virtual channel (VC) number is allocated once at each router to the head flit, and the rest of the remaining flits of the packet inherit the same VC number. The flits of different packets can then be interleaved on the same physical channel.
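The following minimal sketch (again illustrative Python, with invented class names and a two-VC configuration assumed for the example) shows how the head flit is assigned a VC once and how flits of different packets can then share the same physical channel without head-of-line blocking:

```python
class InputPort:
    """One physical channel terminating on several virtual-channel buffers."""
    def __init__(self, num_vcs=2, depth=4):
        self.vcs = [[] for _ in range(num_vcs)]
        self.depth = depth
        self.vc_of_packet = {}                  # packet id -> VC assigned to its head flit

    def receive(self, packet_id, flit, free_vc=0):
        if packet_id not in self.vc_of_packet:  # head flit: allocate a VC once
            self.vc_of_packet[packet_id] = free_vc
        vc = self.vc_of_packet[packet_id]       # body/tail flits inherit the VC
        if len(self.vcs[vc]) < self.depth:
            self.vcs[vc].append((packet_id, flit))
            return True
        return False

port = InputPort(num_vcs=2)
# Packet A waits on VC0, but packet B's flits still advance over the same
# physical channel because they occupy VC1, so there is no head-of-line blocking.
port.receive("A", "head", free_vc=0)
port.receive("B", "head", free_vc=1)
port.receive("B", "body", free_vc=1)
print(port.vcs)
```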
D. Router Microarchitecture
As shown in figure 4, the router consists of input ports. There are five input ports: North, East, West, South, and Local. Each port has virtual channels (depending on the design). The crossbar connects the input ports to the output ports. The control logic facilitates the smooth flow of packets from the input side to the output side. The first function is the buffering of flits: whenever a flit arrives on a channel, it occupies a buffer. The next task is route computation: for a flit residing inside a buffer, the route computation unit assigns the output port. The process of finding an output port for a packet residing inside a buffer is called route computation. Route computation is done for the head flit; the body and tail flits inherit the route assigned to the head flit. The next task is virtual channel allocation, which is based upon handshaking between adjacent routers.
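The paper does not state which routing algorithm the route-computation unit implements; assuming dimension-ordered (XY) routing, a common choice for mesh NoCs, the route-computation step could be sketched as follows:

```python
def xy_route(cur, dst):
    """Pick the output port for a head flit in a 2-D mesh using XY routing.
    cur and dst are (x, y) router coordinates; body/tail flits reuse this result."""
    cx, cy = cur
    dx, dy = dst
    if dx != cx:                       # route along X first
        return "East" if dx > cx else "West"
    if dy != cy:                       # then along Y
        return "North" if dy > cy else "South"
    return "Local"                     # arrived: eject to the attached IP

print(xy_route((1, 0), (3, 3)))   # East
print(xy_route((3, 0), (3, 3)))   # North
print(xy_route((3, 3), (3, 3)))   # Local
```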
E. Router Algorithm
The process of reserving a buffer in the downstream router is called virtual channel allocation. The next task is switch allocation: whenever multiple flits require the same output port, the switch allocator chooses one flit. Once the flit has been chosen, the next task is switch traversal. At most 5 flits can traverse the switch in any given clock cycle. These are the five logical stages that happen inside the router.
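The arbitration policy of the switch allocator is not fixed in the text; a round-robin arbiter is one common choice, sketched below purely for illustration:

```python
class RoundRobinArbiter:
    """Grant one requester per cycle, rotating priority to avoid starvation."""
    def __init__(self, n_inputs=5):
        self.n = n_inputs
        self.last = self.n - 1

    def grant(self, requests):
        """requests: list of booleans, one per input port; returns granted index or None."""
        for offset in range(1, self.n + 1):
            idx = (self.last + offset) % self.n
            if requests[idx]:
                self.last = idx
                return idx
        return None

arb = RoundRobinArbiter()
print(arb.grant([True, False, True, False, False]))  # grants input 0
print(arb.grant([True, False, True, False, False]))  # next cycle grants input 2
```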
III. IMPLEMENTATION OF POWER TECHNIQUES IN NOC
As technology scales down to deep nanometer nodes, the threshold voltage is gradually reduced, which contributes enormously to an increase in subthreshold leakage current and therefore to high power dissipation. In multicore chips, transmission between IPs requires more bandwidth and speed due to the increase in feature requirements. NoC is one of the solutions that allows higher transmission speed between core IPs and facilitates the latest technological trends like AI, machine learning, etc. The development of NoC provides higher throughput, lower latency, and higher bandwidth. However, NoC suffers from power consumption caused by leakage power and switching activity in multi-core circuitry.
To overcome this problem, the use of power gating in NoC helps. Buffers are used as storage modules for flits, which provides higher bandwidth and helps to avoid deadlock. Some recent research [7] reports that a buffer-less NoC helps in reducing power and area, but comes with trade-offs in deadlock, latency, and livelock behaviour. More recent work shows that power dissipation can be reduced while keeping the buffers, by making some modifications and adding circuitry to the integrated circuit.
To reduce power dissipation in circuits, power optimization techniques like power gating, clock gating, multi-Vt (threshold voltage), and multi-Vdd (supply voltage) are suitable (Figure 5).
IV. PROPOSED WORK
The present work targets the reduction of power consumption in the core architecture as well as an increase in signal transmission speed by implementing a NoC. The proposed work contains several routers connected in a mesh architecture and verified for functionality. Each router is connected to a different IP; here the IPs represent the processor, RAM, ROM, cache memory, sensor unit, DSP, display processor, Wi-Fi, GPU, etc. To improve device performance, idle routers are turned off or deactivated to save power, as explained in the section below.
A. Applying Power Gating to routers
As mentioned in figure 8, while data flits are transmitted from the upstream router (transmitting router) to the downstream router (receiving router), the routers on this path are kept in the active state by turning off their power gates. The destination router address is located in the head flit of the data; depending on the flit transmission path, the respective routers on that path are activated, and a few neighboring routers are also turned on to avoid deadlock and latency problems. For example, data may travel to R33 (router 33) through routers R10, R20, R21, R22, R23, and R33. The path is allocated by route computation based on low latency and deadlock avoidance. If a few paths are already transporting data flits, deactivated routers are turned on when any new data flit arrives. This methodology saves power consumption considerably, whereas keeping all routers continuously in an active state consumes power. The activation of the power gate is controlled by the VC and the data flits. After receiving all flits, if there is no further data transmission for a few cycles, the routers go into sleep mode. A flit that arrives quickly while a router is in the sleep state can lose the data packet or cause latency in the transmission of data, because turning on the power gate in the same cycle increases the latency. To avoid this problem, a methodology is used that generates an early notification for the routers.
B. Early Notification to PG
To avoid the problem of missing data when a router is off (sleep state), the proposed technique turns the router on (active state) before the flit arrives. The upcoming router on a path is notified 5 cycles earlier so that it wakes up in time to capture the data. When the upstream router starts transmitting data to the downstream router, the downstream router is notified to move to the active state.

C. Primary and Secondary Routers

To overcome the drawback of keeping unused routers on, the routers are divided into different slots: primary routers and secondary routers. A microprocessor core has a few main, regularly used IPs such as the CPU, memory, and execution units, which do not need to be turned off for regular iterations. Other units like the GPU, DSP, and sensor processing can be turned off. Major power loss happens by supplying power to units that are non-functional. So the main routers act as primary routers and the remaining ones are slotted as secondary routers. Primary routers are continuously in the active state and secondary routers stay in the sleep state. If a primary router fails completely, one of the secondary routers acts as a primary router and provides complete functionality. We also propose a memory storage table, so a router can easily identify which neighboring router is in the ON state and which router has failed. The table keeps the transmission-path memory status from the upstream router to the downstream router.
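A behavioural sketch of this wake-up policy is shown below (illustrative Python; the 5-cycle look-ahead and the 5-6 idle cycles before sleep follow the text, while the class and constant names are invented for the example):

```python
WAKE_LOOKAHEAD = 5   # cycles of advance notice before a flit arrives (from the text)
SLEEP_IDLE     = 6   # idle cycles before a secondary router is power-gated off

class Router:
    def __init__(self, name, primary=False):
        self.name, self.primary = name, primary
        self.awake, self.idle_cycles, self.wake_in = primary, 0, None

    def notify(self):                  # early notification from the upstream router
        if not self.awake and self.wake_in is None:
            self.wake_in = WAKE_LOOKAHEAD

    def tick(self, flit_arriving=False):
        if self.wake_in is not None:   # count down to wake-up
            self.wake_in -= 1
            if self.wake_in <= 0:
                self.awake, self.wake_in, self.idle_cycles = True, None, 0
        if flit_arriving:
            self.idle_cycles = 0
        elif self.awake and not self.primary:
            self.idle_cycles += 1
            if self.idle_cycles >= SLEEP_IDLE:
                self.awake = False     # secondary router goes to sleep; primaries never do

r = Router("R23", primary=False)
r.notify()
for _ in range(WAKE_LOOKAHEAD):
    r.tick()
print(r.name, "awake:", r.awake)       # awake just in time for the incoming flit
```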
D. Route path Memory table
The memory table stores information about the shortest path from the source router to the destination router. For on-chip networks in the core, router failures are possible; in that case, the memory table stores the failed-router information and prevents other routers from considering the failed router's path. The memory table stores the shortest flit transmission path to reduce latency and improve efficiency. In case of a router failure, the upstream router automatically finds a new path to reach the downstream router, and the updated path is stored in the memory table.

V. RESULTS

The models without and with power gating were compared for different-sized mesh topologies, for both dynamic and static power. The experimental results show that as the number of routers increases, the buffer count in each router increases, and a high number of buffers consumes more power. Routers are kept in an active state during packet transmission between them; once transmission completes, routers ideally wait for 5-6 cycles and then go into sleep mode. Figure 10 is a graphical representation of the total power consumed by the routers. The graph shows the difference between routers without power gating and with power gating applied, for a deterministic input path. An increase in router count increases power dissipation, measured in uW (microwatts), but the applied methodology helps in the reduction of power.
VI. CONCLUSION

High-performance, low-power NoC has become important for multicore integrated circuits. We made a detailed study of the NoC blocks, such as the router VC allocator, input buffer, route computation, and switch arbiter. Routers with 2x2, 2x3, 4x4, and 8x8 mesh topologies have been implemented in this paper, and the total power consumption of the routers without power gating is compared with the power reduction obtained by adding power gating. The buffer is a major power-consuming part of the router. Power optimization techniques improve the efficiency of the circuit and reduce power consumption. Slotting the routers as primary and secondary is useful for high-end applications that need continuity of power while turning off the unused routers; in case of a router failure, the system considers an alternative router by excluding the old router from the path memory list. This contributes to avoiding the deadlock problem and reduces latency in the interconnect. | 3,187.8 | 2020-07-09T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Improved Nelder-Mead Optimization Method in Learning Phase of Artificial Neural Networks
The artificial neural network method is one of the most important and preferred classification algorithms in machine learning. The weights on the connections of an artificial neural network directly affect its classification accuracy. Therefore, finding optimum values of these weights is a difficult optimization problem. In this study, the Nelder-Mead optimization method has been improved and used for training artificial neural networks. The optimum weights of the artificial neural networks are determined in the training stage. The performance of the proposed improved Nelder-Mead-artificial neural networks classification algorithm has been tested on the most common datasets from the UCI machine learning repository. The classification results obtained from the proposed improved Nelder-Mead-artificial neural networks classification algorithm are compared with the results of the standard Nelder-Mead-artificial neural networks classification algorithm. As a result of this comparison, the proposed improved algorithm gave the best results on all datasets.
Introduction
Artificial neural networks (ANNs) are a vital part of artificial intelligence. Machine learning and cognitive sciences depend on ANNs to solve various nonlinear mapping relationships [1]. ANNs normally use a back-propagation algorithm to solve different problems on account of their approximation capabilities. The back-propagation algorithm computes explicit gradients of an error measure, such as the mean square error, with respect to the weights. However, ANNs trained with gradient-descent-based learning algorithms generally suffer from slow convergence and local minima. To get rid of these issues, metaheuristic algorithms that use global search to find the solution have been used to train ANNs. For instance, the genetic algorithm was utilized for training ANNs [2]; another method employed in order to train artificial neural networks is reported by Slowik [3]. He used an adaptive differential evolution algorithm to train ANNs. The aim of the calculation is to find optimal weights with respect to an error rate [3]. Salman Mohaghegi et al. analyzed the performance of particle swarm optimization (PSO) and compared its convergence speed and robustness with the back-propagation algorithm. The learning algorithms were utilized in order to modify the output synaptic weight array, and in each algorithm the centers and widths in the hidden layer were inferred utilizing an offline clustering approach. According to the results, the PSO algorithm showed good performance even with a small number of particles. The statistical results proved PSO to be a robust algorithm for learning ANNs [4]. David J. Montana and Lawrence Davis utilized an evolutionary algorithm for enhancing the performance of feed-forward artificial neural networks. In addition, the algorithm added more domain-specific knowledge into ANNs [5]. Malinak and Jaksa utilized an evolutionary algorithm with a local search algorithm to get the best performance [6]. Kennedy and Eberhart used the swarm intelligence method PSO for the first time [7]. Carvalho and Ludermir applied PSO to medical benchmark problems. They compared the statistical results with a local search operator in order to learn ANNs. The result showed a better performance than other well-studied algorithms [8]. Apart from PSO, scientists have applied other swarm intelligence methods such as the ant colony algorithm. Blum and Socha employed a new version of the ACO algorithm for training artificial neural networks [9]. Liang Hu, Longhui Qin, Kai Mao, Wenyu Chen, and Xin Fu used a genetic algorithm for training ANNs to solve the real-life problem of a multipath ultrasonic gas flowmeter. They used GANNs for a multipath ultrasonic flowmeter (UFM) to help decrease its error rate when detecting the flowrate of a complicated flow field. The evaluation of the results produced by ANNs and GANNs demonstrated better achievement for the ANNs trained by the genetic algorithm compared with the ones trained with a gradient-descent-based algorithm [10]. In addition, the genetic algorithm has been utilized not only to find the optimal weights of a neural network but also to enhance the whole structure of artificial neural networks. For example, K. G. Kapanova, Dimov and M.
Sellier utilized a GA for optimizing the architecture of ANNs. This approach allowed rearranging the hidden layers, the neurons in every hidden layer, the connections between neurons, and the activation function [11]. In this study, we improved the Nelder-Mead optimization method and used it to determine the optimum weight values of ANNs. In the proposed improved Nelder-Mead-ANNs classification algorithm, the ANNs classification algorithm performs well in the learning stage. The paper is organized as follows: the ANNs classification algorithm is explained in the second section; the standard Nelder-Mead optimization method is explained in the third section; the improved Nelder-Mead optimization method is explained in the fourth section; and the experimental results are discussed in the fifth section. Finally, the conclusion is given in the last section.
Artificial Neural Networks
ANNs consist of neurons that are connected together. These neurons can be bound to each other in very complex ways, as in a real neural system. Each neuron has several differently weighted inputs and one output. Accordingly, the sum of the inputs with different weights is expressed by the following Equation 1 [12].
Here, m is the number of neurons in the layer, w is the weight between neurons i and j, and b is a bias term. The neuron output value is obtained after the output of the net function passes through the activation function. Usually, in ANN systems, the sigmoid activation function is used. The sigmoid activation function is shown in Equation 2 [13].
The error rate between the output from the ANN system and the actual output is calculated. The mean square error is given in Equation 3 below.
Here, n represents the number of instances in the dataset, Ox is the output generated from the x-th input, and Tx is the target output of the x-th input. After training is completed, the trained network can estimate the result for any given data according to the last state of the weight values. This process is called network training.
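A compact sketch of Equations (1)-(3) (the weighted sum with bias, the sigmoid activation, and the mean square error) might look as follows; this is plain Python standing in for the actual implementation, with arbitrary example weights and inputs:

```python
import math

def neuron(inputs, weights, bias):
    """Equations (1)-(2): weighted sum plus bias, passed through the sigmoid."""
    net = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-net))

def mse(outputs, targets):
    """Equation (3): mean square error over n instances."""
    n = len(outputs)
    return sum((o - t) ** 2 for o, t in zip(outputs, targets)) / n

x = [0.5, -1.2, 3.0]
print(neuron(x, weights=[0.1, 0.4, -0.2], bias=0.05))
print(mse([0.8, 0.2], [1.0, 0.0]))
```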
There are different training algorithms in network learning. In this study, the Nelder-Mead optimization method is used in the network training process.
Nelder-Mead optimization method
Nelder-Mead is a simple optimization method developed by Nelder and Mead [17]. It is also called the Amoeba method in much of the literature. It is widely used for multi-dimensional unconstrained optimization problems. The basic Nelder-Mead method is very simple to understand and very easy to use. For this reason, it is a very popular method, especially in many fields of chemistry and medicine. It is widely used for solving parameter estimation and statistical problems. It can also be used in the field of experimental mathematics [14].
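For reference, a minimal textbook-style Nelder-Mead iteration is sketched below (reflection, expansion, contraction, and shrink with coefficients α, γ, ρ, σ; this generic sketch is not the exact variant implemented later in the paper):

```python
import numpy as np

def nelder_mead(f, x0, iters=200, alpha=1.0, gamma=2.0, rho=0.5, sigma=0.5):
    """Minimize f starting from point x0 with a basic Nelder-Mead simplex."""
    n = len(x0)
    simplex = [np.array(x0, float)] + [np.array(x0, float) + 0.5 * np.eye(n)[i] for i in range(n)]
    for _ in range(iters):
        simplex.sort(key=f)                               # best first, worst last
        centroid = np.mean(simplex[:-1], axis=0)          # centroid of all but the worst
        xr = centroid + alpha * (centroid - simplex[-1])  # reflection
        if f(simplex[0]) <= f(xr) < f(simplex[-2]):
            simplex[-1] = xr
        elif f(xr) < f(simplex[0]):                       # expansion
            xe = centroid + gamma * (xr - centroid)
            simplex[-1] = xe if f(xe) < f(xr) else xr
        else:                                             # contraction
            xc = centroid + rho * (simplex[-1] - centroid)
            if f(xc) < f(simplex[-1]):
                simplex[-1] = xc
            else:                                         # shrink towards the best vertex
                simplex = [simplex[0]] + [simplex[0] + sigma * (x - simplex[0]) for x in simplex[1:]]
    return min(simplex, key=f)

print(nelder_mead(lambda p: (p[0] - 3) ** 2 + (p[1] + 1) ** 2, [0.0, 0.0]))  # approx [3, -1]
```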
Improved Nelder-Mead optimization method
Since the Nelder-Mead optimization method is simple, it is also known as a simplex optimization method. In this study, the Nelder-Mead optimization method is combined with the PSO algorithm to find the optimum values of the ANN weights. In the proposed classification algorithm, the weights of the ANNs classification algorithm are first generated randomly, the MSE is calculated, and the best result is selected. The weights are then updated according to the best of the random solutions by the Nelder-Mead optimization method, and the best MSE is selected again. In the next iteration, if the difference between the error of the previous iteration and the current error is less than 0.5, the weights are updated according to the velocity and position equations of the PSO algorithm, given in Equations 4 and 5 below.
v_id^(t+1) = w v_id^t + c1 r1 (Pbest_id − x_id^t) + c2 r2 (Gbest_d − x_id^t)   (4)
x_id^(t+1) = x_id^t + v_id^(t+1)   (5)
Here, t is the iteration number, w is the inertia weight, d is the dimension, x_id is the current position of the particle, v_id^t is the previous velocity of the particle, Pbest_i is the best fitness value found by the particle, Gbest is the best fitness value of all particles, c1 and c2 are two positive constants representing the acceleration factors, and r1 and r2 are two random functions in the range [0, 1]. Otherwise, the weights are updated according to the Nelder-Mead optimization method. Optimization algorithms are used in many engineering areas and aim to find the best final result. In this study, the improved Nelder-Mead optimization method is used to find the optimum weights in the ANNs classification model. The flowchart of the proposed ANNs classification model is given in Fig. 1 below.
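The switching rule above can be sketched as follows (illustrative Python; the 0.5 threshold and the update equations (4)-(5) follow the text, while the function names and the constants w, c1, c2 used in the demo are placeholders):

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """Equations (4)-(5): velocity and position update for one weight vector."""
    r1, r2 = random.random(), random.random()
    v_new = [w * vi + c1 * r1 * (pb - xi) + c2 * r2 * (gb - xi)
             for vi, xi, pb, gb in zip(v, x, pbest, gbest)]
    x_new = [xi + vi for xi, vi in zip(x, v_new)]
    return x_new, v_new

def hybrid_update(prev_error, curr_error, state):
    """If the error improvement has stalled (< 0.5), apply the PSO update;
    otherwise the Nelder-Mead update of the ANN weights would be applied."""
    if abs(prev_error - curr_error) < 0.5:
        state["x"], state["v"] = pso_step(state["x"], state["v"],
                                          state["pbest"], state["gbest"])
        return "pso"
    return "nelder-mead"

state = {"x": [0.1, -0.2], "v": [0.0, 0.0], "pbest": [0.3, 0.1], "gbest": [0.5, 0.0]}
print(hybrid_update(prev_error=0.90, curr_error=0.89, state=state))  # stalled -> "pso"
```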
The experimental results
In the experimental results section, the performance of the two methods is compared by applying the developed Nelder-Mead method and the standard Nelder-Mead method in the training of the ANNs classification algorithm. The codes of the standard Nelder-Mead and developed Nelder-Mead methods were written in Visual Studio. The classification algorithms were applied to nine UCI datasets [16]. The characteristics of the datasets are given in Table 1. An appropriate ANN was created for each dataset and the number of nodes in the hidden layer was set. A bias node was used in the hidden and output layers, and the sigmoid was used as the activation function. In the Nelder-Mead optimization method, the values α, γ, ρ, and σ were set to 2, 0.5, 1, and 0.5, respectively.
Conclusion

In our study, we performed the training of the ANNs classification algorithm with the standard Nelder-Mead and the improved Nelder-Mead optimization methods. The improved Nelder-Mead optimization method was developed because the standard Nelder-Mead optimization method did not fulfill this task properly. With the help of the PSO optimization method, the developed Nelder-Mead optimization method gave better results in the ANNs training. The improved Nelder-Mead-ANNs classification algorithm was applied to 9 datasets of the UCI machine learning repository and performed better on all datasets. So the developed classification algorithm can be applied to many fields such as medical, engineering, and recognition systems.

Fig. 1. Flowchart of the proposed ANNs model

The Nelder-Mead optimization method is a simple method, introduced by Nelder and Mead, used to find the local minimum of a function of several variables. A simplex is an n-dimensional geometric shape; it includes (N + 1) points in N dimensions. The simplex in two dimensions is a triangle; in three dimensions it becomes a tetrahedron (4 faces). It is an example of a search method that compares the function values at the three corner points of a triangle. By evaluating the function at all given points, the method reflects the worst vertex through the centroid of the remaining vertices; this worst corner is then replaced by a new point, so we obtain a new triangle. The search resumes in this new triangle, so that the process produces a sequence of triangles in which the function value at the worst vertex becomes smaller and smaller [15]. The steps of the Nelder-Mead optimization method consist of reflection, expansion, contraction, and shrinking of the simplex, controlled by the coefficients α, γ, ρ, and σ, respectively. In two dimensions, this method is a pattern search method that compares the function values at the corners of a triangle. When we consider the minimization of a bivariate function z = f(x, y), the value of z at the worst corner of the triangle (the worst vertex w) is the greatest.
Table 1. The characteristics of the datasets

Initial values of the weights were between 10 and -10, and the algorithm was run for 1000 iterations. The MSE was used as the fitness function in both Nelder-Mead optimization methods. The classification performance of the algorithms is calculated based on the final value of this fitness function. 10-fold cross-validation was used for the classification accuracy (CA). The classification results of the proposed improved Nelder-Mead-ANNs classification algorithm and the standard Nelder-Mead-ANNs classification algorithm are given in Table 2 below. In this table, the CA and MSE obtained in the training and testing stages of both classification algorithms are given. Also, H is the optimal number of neurons in the hidden layer. When we examine the experimental results in Table 2, the proposed Nelder-Mead-ANNs classification algorithm showed better classification accuracy and MSE on all datasets than the standard Nelder-Mead-ANNs classification algorithm. | 2,669.8 | 2018-12-27T00:00:00.000 | [
"Computer Science"
] |
The Tribonacci-type balancing numbers and their applications
In this paper, we define the Tribonacci-type balancing numbers via a Diophantine equation with a complex variable and then give their miscellaneous properties. Also, we study the Tribonacci-type balancing sequence modulo m and then obtain some interesting results concerning the periods of the Tribonacci-type balancing sequences for any m. Furthermore, we produce the cyclic groups using the multiplicative orders of the generating matrices of the Tribonacci-type balancing numbers when read modulo m. Then we give the connections between the periods of the Tribonacci-type balancing sequences modulo m and the orders of the cyclic groups produced. Finally, we expand the Tribonacci-type balancing sequences to groups and give the definition of the Tribonacci-type balancing sequences in the 3-generator groups and also investigate these sequences in the non-abelian finite groups in detail. In addition, we obtain the periods of the Tribonacci-type balancing sequences in the polyhedral groups (2, 2, n), (2, n, 2), (n, 2, 2), (2, 3, 3), (2, 3, 4), (2, 3, 5).
It is important to note that T n = T (0,1,1) n are the ordinary Tribonacci numbers. For a finitely generated group G = ⟨A⟩ where A = {a 1 , a 2 , . . . , a n }, the sequence x i = a i for 1 ≤ i ≤ n, x n+i = x i x i+1 · · · x n+i−1 for i ≥ 1, is called the Fibonacci orbit of G with respect to the generating set A, denoted F A (G) (cf. [4,5]).
A sequence is periodic if, after a certain point, it consists only of repetitions of a fixed subsequence. The number of elements in the shortest repeating subsequence is called the period of the sequence. For example, the sequence a, b, c, d, b, c, d, b, c, d, . . . is periodic after the initial element a and has period 3. A sequence is simply periodic with period k if the first k elements in the sequence form a repeating subsequence. For example, the sequence a, b, c, d, a, b, c, d, a, b, c, d, . . . is simply periodic with period 4.
The polyhedral (triangle) group (l, m, n) for l, m, n > 1, is defined by the presentation (l, m, n) = ⟨x, y, z | x l = y m = z n = xyz = e⟩.
Behera and Panda [2] defined the sequence of balancing numbers by the aid of the equation (1) and then gave its miscellaneous properties. Since then, obtaining a recurrence sequence by using a certain Diophantine equation has been a topic of current interest. In the literature, one can find many interesting properties and applications of the balancing-like sequences which are obtained from a certain Diophantine equation; see, for example, [3, 8, 20, 25-27, 31, 32]. We derive here a new recurrence sequence by using a Diophantine equation with a complex variable and call it the Tribonacci-type balancing sequence.
In the first part of the paper, we give number theoretic properties of the Tribonacci-type balancing sequence.
The study of the behavior of linear recurrence sequences under a modulus began with the earlier work of Wall [37], where the periods of the ordinary Fibonacci sequences modulo m were investigated. It is important to note that the period of a recurrence sequence modulo m coincides with the period of this sequence in the cyclic group C m . Lu and Wang contributed to the study of Wall numbers for the k-step Fibonacci sequence [28]. Recently, the theory was extended to some special linear recurrence sequences by several authors; see, for example, [9-12, 16, 17, 19, 34, 36]. Patel and Ray [33] studied the period, rank and order of the sequence of balancing numbers modulo m. In the second part of the paper, we consider the Tribonacci-type balancing sequence modulo m and then we derive some interesting results concerning the periods of the Tribonacci-type balancing sequences for any m. Also, we produce the cyclic groups using the multiplicative orders of the generating matrices of the Tribonacci-type balancing numbers when read modulo m. Then we give the connections between the periods of the Tribonacci-type balancing sequences modulo m and the orders of the cyclic groups produced.
In the mid-eighties, Wilcox applied the idea first introduced by Wall to the abelian groups [38]. The theory was expanded to some finite simple groups by Campbell et al. [5], where the Fibonacci sequence in a non-abelian group generated by two generators was introduced. The concept of the Fibonacci sequence for more than two generators has also been considered by several authors; see, for example, [1,4,18,22,23,29,30]. In [9,11,12,16,17,21,29], the authors studied some special linear recurrence sequences defined by the aid of the elements of a group. Later, the theory was extended to the quaternions and the complex numbers; see [13-15]. In the third part of the paper, we give the definition of the Tribonacci-type balancing sequences in the 3-generator groups and then we investigate these sequences in the non-abelian finite groups in detail. Finally, we obtain the periods of the Tribonacci-type balancing sequences in the polyhedral groups (2, 2, n), (2, n, 2), (n, 2, 2), (2, 3, 3), (2, 3, 4), (2, 3, 5) as applications of the results produced.
Results
A positive integer n is called a Tribonacci-type balancing number if i + i^2 + i^3 + · · · + i^{n−1} = i^{n+1} + i^{n+2} + · · · + i^{n+k} for some positive integer k, where i = √−1. The positive integer k is called the Tribonacci-type balancer corresponding to the Tribonacci-type balancing number n.
The first few Tribonacci-type balancing numbers are 4, 5, 8, 9 and 12, with balancers 3, 4, 7, 8 and 9, respectively. For n ≥ 1, the nth Tribonacci-type balancing number B_{i,n} is defined recursively, with initial conditions B_{i,0} = 4, B_{i,1} = 5 and B_{i,2} = 8.
Using an inductive argument, we derive several relations from the equation in the definition of the Tribonacci-type balancing numbers, and the auxiliary equation of the Tribonacci-type balancing recurrence is then clear. Using the equation (3), we can give a Binet formula for the Tribonacci-type balancing numbers, and by a simple calculation we obtain the generating function of the Tribonacci-type balancing numbers. Now we give an exponential representation for the Tribonacci-type balancing numbers by the aid of the generating function g(x) in the following proposition.
Proposition 1. The Tribonacci-type balancing sequence {B_{i,n}} has an exponential representation obtained from the generating function g(x).
Proof. By a simple calculation, we may write the required expansion, and so we have the conclusion. □
If we reduce the Tribonacci-type balancing sequence {B_{i,n}} by a modulus m, taking least nonnegative residues, then we get the recurrence sequence {B_{i,n}(m)}, where B_{i,j}(m) denotes the jth element of the Tribonacci-type balancing sequence when read modulo m. We note here that the recurrence relations in the sequences {B_{i,n}(m)} and {B_{i,n}} are the same.
Theorem 1. The sequence {B_{i,n}(m)} is simply periodic.
Proof. Consider the set S of 3-tuples of residues modulo m. Since |S| = m^3, there are at most m^3 distinct 3-tuples of consecutive terms of the Tribonacci-type balancing sequence modulo m. Thus, at least one of these 3-tuples appears twice in the sequence {B_{i,n}(m)}, and therefore the subsequence following this 3-tuple repeats. From the equation (2), the recurrence can also be run backwards, which implies that the Tribonacci-type balancing sequence modulo m is simply periodic. □
Let the notation P_{B_i}(m) denote the smallest period of the sequence {B_{i,n}(m)}.
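To make the notion of the period modulo m concrete, the following Python sketch finds the smallest period of a third-order recurrence modulo m by waiting for the first return of the state 3-tuple. It is only an illustration: since the explicit Tribonacci-type balancing recurrence is not reproduced in the text above, the ordinary Tribonacci recurrence T_n = T_{n-1} + T_{n-2} + T_{n-3} is used as a stand-in, and the function and variable names are not from the paper.

```python
def period_mod_m(initial, step, m):
    """Smallest period of a third-order recurrence modulo m.

    initial: the first three terms; step: maps (a, b, c) to the next term.
    The state 3-tuple modulo m determines all later terms, so the sequence
    is simply periodic and the period is the first return to the start state.
    """
    start = tuple(x % m for x in initial)
    state, length = start, 0
    while True:
        state = (state[1], state[2], step(*state) % m)
        length += 1
        if state == start:
            return length


def tribonacci_step(a, b, c):
    # Stand-in recurrence: ordinary Tribonacci numbers T(0, 1, 1).
    return a + b + c


print(period_mod_m((0, 1, 1), tribonacci_step, 8))
```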
From the equation (2), we may write the companion matrix C_i of the Tribonacci-type balancing recurrence; the matrix C_i is said to be the Tribonacci-type balancing matrix, and the corresponding matrix relation links consecutive triples of terms of the sequence. By mathematical induction on n, the nth powers of the matrix C_i can be written explicitly. Given an integer matrix A = [a_{ij}], A (mod m) means that all entries of A are reduced modulo m, that is, A (mod m) = (a_{ij} (mod m)). Let us consider the set ⟨A⟩_m = {A^n (mod m) | n ≥ 0}. If (det A, m) = 1, then the set ⟨A⟩_m is a cyclic group; if (det A, m) ≠ 1, then the set ⟨A⟩_m is a semigroup. Since det C_i = −1, the set ⟨C_i⟩_m is a cyclic group for every positive integer m ≥ 2. From (5), it is easy to see that the cardinality of the set ⟨C_i⟩_m cannot be odd; thus, for m ≥ 2, we obtain a relation which yields that |⟨C_i⟩_m| = 2m. Now we give the connections between the periods of the Tribonacci-type balancing sequences modulo m and the orders of the cyclic groups produced in the following theorem; a small numerical sketch of computing such matrix orders is given below.
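The order of the cyclic group generated by a companion matrix modulo m can be checked numerically with a sketch like the one below. As above, this is only an illustration: the entries of the Tribonacci-type balancing matrix C_i are not reproduced in the extracted text, so the companion matrix of the ordinary Tribonacci recurrence is used as a placeholder, and only the mechanics of computing a multiplicative order modulo m are shown.

```python
import numpy as np


def matrix_order_mod_m(matrix, m):
    """Multiplicative order of an integer matrix modulo m.

    Assumes det(matrix) is coprime to m, so that <matrix>_m is a cyclic group.
    """
    matrix = np.array(matrix, dtype=np.int64) % m
    identity = np.eye(len(matrix), dtype=np.int64)
    power, order = matrix.copy(), 1
    while not np.array_equal(power, identity):
        power = (power @ matrix) % m
        order += 1
    return order


# Placeholder: companion matrix of the ordinary Tribonacci recurrence (not C_i itself).
companion = [[0, 1, 0],
             [0, 0, 1],
             [1, 1, 1]]
print(matrix_order_mod_m(companion, 8))
```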
Theorem 2. For any m ≥ 2, the period P_{B_i}(m) of the Tribonacci-type balancing sequence modulo m and the order of the cyclic group ⟨C_i⟩_m are related as indicated above.
Proof. In fact, it is easy to see that the Tribonacci-type balancing sequence {B_{i,n}} conforms to a pattern from which the stated connection follows. □
Let G be a finite k-generator group with G = ⟨x_1, x_2, . . . , x_k⟩; we call (x_1, x_2, . . . , x_k) a generating k-tuple for G. Now we redefine the Tribonacci-type balancing sequence by means of the elements of a group which has three generators.
Definition 1. Let G be a 3-generator group and let (x_1, x_2, x_3) be a generating 3-tuple of G. For the generating 3-tuple (x_1, x_2, x_3), we define the Tribonacci-type balancing orbits of the first and second kind of the group G by the corresponding recurrences in the generators, and we denote them by B^(1)_(x_1,x_2,x_3)(G) and B^(2)_(x_1,x_2,x_3)(G), respectively.
Theorem 3. Let G be a 3-generator group and let (x_1, x_2, x_3) be a generating 3-tuple for G. If G is finite, then the sequences B^(1)_(x_1,x_2,x_3)(G) and B^(2)_(x_1,x_2,x_3)(G) are simply periodic.
Proof. Let us consider the sequence B^(1)_(x_1,x_2,x_3)(G). Suppose that n is the order of G. Since there are n^3 distinct 3-tuples of elements of G, at least one of the 3-tuples appears twice in the sequence B^(1)_(x_1,x_2,x_3)(G). Therefore, the subsequence following this 3-tuple repeats; because of the repetition, the sequence is periodic. Then there are natural numbers i and j with i < j for which the corresponding 3-tuples coincide, and from the defining recurrence relation of the Tribonacci-type balancing orbit of G it follows that the repetition extends back to the beginning of the sequence. The proof for the Tribonacci-type balancing orbit of the second kind of G is similar to the above and is omitted. □
Let the notations LB^(1)_(x_1,x_2,x_3)(G) and LB^(2)_(x_1,x_2,x_3)(G) denote the lengths of the periods of the Tribonacci-type balancing orbits of the first and second kind, respectively. From the definitions, it is clear that the lengths of the periods of the Tribonacci-type balancing orbits of the first and second kinds of a finite non-abelian 3-generator group depend on the chosen generating set and the order in which the assignments of x_1, x_2, x_3 are made.
Consider the sequences {u_n} and {v_n}. It is easy to prove that the sequences {u_n} and {v_n} modulo m are periodic.
Conclusion
In this paper, the Tribonacci-type balancing numbers were defined and their miscellaneous properties were given. Also, by considering the Tribonacci-type balancing sequence modulo m, some interesting results concerning the periods of the Tribonacci-type balancing sequence for any m were obtained. In addition, the cyclic groups generated by the generating matrices of the Tribonacci-type balancing numbers when read modulo m were produced. Finally, the Tribonacci-type balancing sequences were extended to groups, and the periods of these sequences in the finite polyhedral groups were examined.
"Mathematics"
] |
γBOriS: Identification of Origins of Replication in Gammaproteobacteria using Motif-based Machine Learning
The biology of bacterial cells is, in general, based on the information encoded on circular chromosomes. Regulation of chromosome replication is an essential process which mostly takes place at the origin of replication (oriC). Identification of high numbers of oriC is a prerequisite for systematic studies that could lead to insights into oriC functioning as well as novel drug targets for antibiotic development. Current methods for identifying oriC sequences rely on chromosome-wide nucleotide disparities and are therefore limited to fully sequenced genomes, leaving a superabundance of genomic fragments unstudied. Here, we present γBOriS (Gammaproteobacterial oriC Searcher), which accurately identifies oriC sequences on gammaproteobacterial chromosomal fragments by employing motif-based DNA classification. Using γBOriS, we created BOriS DB, which currently contains 25,827 oriC sequences from 1,217 species, thus making it the largest available database for oriC sequences to date.
Introduction
Before every cell division, bacteria need to duplicate their genetic material in order to ensure that no information is lost. This essential process, called DNA replication, initiates in a highly regulated manner at specific chromosomal sites called oriC and is coordinated with many other cellular mechanisms (1,2). Usually, bacteria contain (multiple copies of) a single chromosome and this chromosome contains a single oriC sequence, although there are exceptions as, e.g., Vibrionales contain two chromosomes (3,4). As many different proteins need to bind to and act upon oriC in order for initiation to occur, oriC contains many different protein binding sites and DNA motifs (5,6). While there is a high level of variation between oriC sequences of different organisms, there are also some nearly universal features of oriC sequences (7)(8)(9). Among these are 9 bp short DNA motifs called DnaA boxes, which act as binding sites for the initiator protein DnaA, and AT-rich regions, where the DNA double helix unwinds before the replication machinery is loaded onto the DNA (10,11). Furthermore, oriC contains binding sites for proteins that relay information on the status of the cell. Therefore, oriC sequences can be considered biological information compilers and processors (12).
All currently available computational methods for the identification of oriC sequences in bacterial chromosomes rely on nucleotide disparities on the leading and lagging strand of the DNA double helix (13)(14)(15)(16)(17). As replication usually extends from oriC bidirectionally, it is one of two chromosomal sites where the leading and lagging strand switch places. The most frequently used disparity, the GC skew, usually assumes a V- or inverted V-shape, with its minimum indicating the presence of the origin of replication (18,19). However, due to natural variation, the shape of the skew can only reliably be asserted when analysing whole chromosomal sequences. Ori-Finder (20), which combines the GC skew with the location of DnaA boxes, was used to create the current state-of-the-art oriC database DoriC (21,22). While existing methods for the annotation of oriC sequences are mainly based on statistical approaches, motif-based approaches for DNA sequence classification by machine learning might be a promising alternative. Machine learning methods, in particular deep convolutional neural networks (CNNs), have already been widely used for similar tasks (23)(24)(25)(26)(27)(28)(29). However, these methods are notorious for needing large amounts of data and computing power. Support vector machines (SVMs) that perform classification on the basis of k-mer (i.e., n-gram) counts represent a less data-intensive alternative, and have even been shown to outperform CNNs when training data is small in number (30,31). Some k-mer-SVMs even allow mismatches or gaps in these k-mers, leading to more realistic models of DNA motifs, which are subject to natural variation (32)(33)(34). In the current study, we present γBOriS (Gammaproteobacterial oriC Searcher), a tool that is able to identify oriC sequences for Gammaproteobacteria. This class of organisms contains many model organisms (e.g., Escherichia coli, Vibrio cholerae and Pseudomonas putida) and causative agents of serious illnesses such as cholera, plague and enteritis, which makes it a highly relevant study object. Making use of recent developments in the fields of DNA sequence classification and machine learning, γBOriS enables oriC identification on both full chromosomes and chromosomal fragments, which drastically increases the number of sequences that can be searched for oriC sequences. Finally, using publicly available Gammaproteobacterial chromosomal fragments as input for γBOriS, we gathered the largest dataset of bacterial oriC sequences available to date, BOriS DB.
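As a concrete illustration of the GC-skew criterion described above, the short Python sketch below computes a cumulative GC skew over fixed windows and reports the position of its minimum as a candidate oriC locus. This is an illustrative snippet rather than part of the γBOriS or Ori-Finder code, and the window size is an arbitrary placeholder.

```python
def cumulative_gc_skew(sequence, window=1000):
    """Cumulative GC skew, (G - C) / (G + C), accumulated over fixed windows."""
    skew, running = [], 0.0
    for start in range(0, len(sequence) - window + 1, window):
        chunk = sequence[start:start + window].upper()
        g, c = chunk.count("G"), chunk.count("C")
        running += (g - c) / (g + c) if (g + c) else 0.0
        skew.append(running)
    return skew


def candidate_oric_position(sequence, window=1000):
    """Approximate position of the cumulative GC-skew minimum."""
    skew = cumulative_gc_skew(sequence, window)
    return min(range(len(skew)), key=skew.__getitem__) * window


# Toy example; on real chromosomes the minimum marks the candidate oriC region.
print(candidate_oric_position("ATGC" * 5000))
```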
Materials and Methods
A. Data curation and creation. A ground truth oriC dataset was compiled using a semi-automated method described in (9,35,36). A given chromosome is first split into 2.5 kb fragments that are centered around intergenic regions; then, for those fragments close to the minimum of the chromosome's cumulative GC skew, their respective probability of unwinding is calculated using WebSIDD (37). Default values (37°C, 0.1 M salt, circular DNA, copolymeric) were chosen for the predictions, and negative superhelical density (σ) values were tested over a range. A dataset of seed sequences was created by extracting the central 9 bp from oriC sequences in the ground truth dataset. Negative sequences for the initial classification training were collected by picking, for each chromosome present in the positive dataset, another sequence of the same size with the same seed sequence from the respective chromosome. In order to identify the optimal fragment length for classification, the lengths of both the positive and the negative sequences were varied from 150 to 1500 bp in steps of 50 bp. For cutoff selection, a highly imbalanced dataset was created by extracting all fragments of a given length around each of the seed sequences from each of the chromosomes in the positive dataset. Both the balanced and the imbalanced datasets were split into training and testing datasets using a 70%-30% split, leading to 318 chromosomes in the former and 141 chromosomes in the latter. Chromosomes were downloaded from the NCBI RefSeq FTP server. For BOriS DB, a list of RefSeq organisms was taken from ftp://ftp.ncbi.nlm.nih.gov/genomes/refseq/bacteria/assembly_summary.txt, and a list of chromosomes for the UBA genomes was taken from Supplementary 2 of (38).
B. Sequence classification using LS-GKM models.
The support vector machines used as classifiers in this study derive distance matrices from a set of input sequences by counting substrings and comparing their numbers in sequence pairs directly, which makes these approaches comparatively fast and memory-efficient. The Spectrum Kernel is based on simple k-mer composition differences (30). In contrast, LS-GKM and gkm-SVM models compare k-mers while allowing for mismatches and small differences between them (33,39).
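The sketch below illustrates the general idea of a spectrum-kernel-style classifier: sequences are represented by their k-mer counts and a linear SVM is trained on these counts. It is a simplified stand-in for LS-GKM (no mismatches or gaps are modeled), uses scikit-learn, and the toy sequences, labels and k value are placeholders.

```python
from itertools import product

import numpy as np
from sklearn.svm import SVC


def kmer_counts(sequence, k=4):
    """Vector of overlapping k-mer counts for a DNA sequence (fixed ACGT order)."""
    kmers = [''.join(p) for p in product("ACGT", repeat=k)]
    index = {kmer: i for i, kmer in enumerate(kmers)}
    counts = np.zeros(len(kmers))
    seq = sequence.upper()
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if kmer in index:  # skip k-mers with ambiguous bases
            counts[index[kmer]] += 1
    return counts


# Toy stand-ins for oriC (label 1) and non-oriC (label 0) fragments.
sequences = ["ATGCGCATTTAAAT" * 20, "GGGGCCCCGGGGCC" * 20]
labels = [1, 0]
X = np.array([kmer_counts(s) for s in sequences])
clf = SVC(kernel="linear").fit(X, labels)  # linear kernel on k-mer counts
print(clf.predict(X))
```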
C. oriC database comparisons. Comparisons were performed between BOriS DB v1 and DoriC v6.5, which were the latest accessible versions of the databases at the time of writing. Pairs of sequences from different datasets were compared by calculating the length of the longest common substring and dividing it by the length of the shorter sequence. Two sequences were considered to be identical if the relative sequence identity was above a cutoff of 0.7. This cutoff was chosen in order to include overlapping sequences. The internal consistency of the sequence datasets was evaluated by calculating all-vs-all sequence similarities from pairwise sequence alignments after making the sequences in the datasets the same length. Using multidimensional scaling and hierarchical clustering (as implemented in the Python packages scikit-learn and scipy, respectively (40,41)), these distance matrices were visualized. A database was deemed more consistent if the degree of clustering was higher or if oriC sequences from closely related organisms were close on the tree.
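The pairwise comparison measure described above can be written down in a few lines; the sketch below computes the longest common substring by dynamic programming and divides it by the length of the shorter sequence, with the 0.7 cutoff taken from the text. The function names and toy sequences are illustrative only.

```python
def longest_common_substring(a, b):
    """Length of the longest common substring of two strings (dynamic programming)."""
    best = 0
    prev = [0] * (len(b) + 1)
    for i in range(1, len(a) + 1):
        curr = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                curr[j] = prev[j - 1] + 1
                best = max(best, curr[j])
        prev = curr
    return best


def relative_identity(a, b):
    """Longest common substring length divided by the shorter sequence length."""
    return longest_common_substring(a, b) / min(len(a), len(b))


# Two overlapping oriC candidates count as identical at the 0.7 cutoff.
print(relative_identity("ATGCGCATTTAAAT", "GCGCATTTAAATCC") >= 0.7)
```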
Results
Implementation of γBOriS. The stand-alone version of γBOriS is implemented in R and requires a Linux operating system, whereas the frontend of the webserver is written in jQuery and can be used without any software requirements. As input, γBOriS takes a fasta-formatted file containing one or more DNA sequences of any length and returns two fasta-formatted text files: one contains fragments γBOriS identified as oriC, and the other contains DNA fragments for which the classifier abstained from a decision (see Methods). γBOriS is composed of three modules (fig. 1). The core module consists of a motif-based sequence SVM, whose parameters were chosen in order to maximize the AUC on discrimination of oriC from non-oriC sequences in a balanced dataset (see Methods, fig. 2). To this end, we trained a total of 12,877 LS-GKM and Spectrum Kernel SVMs (30,32,33) with varying parameters and sequence fragment sizes. The highest AUC on the test dataset, 0.977, was achieved with an LS-GKM model trained on 1250 bp fragments, with a word length of 10 bp, 6 informative columns, and at most 4 mismatches (see supplementary information).
To turn this sequence classifier into a sequence identifier, the first module of γBOriS splits the input sequence into a manageable number of candidate fragments by picking only fragments centered around an occurrence of a so-called seed sequence. This list of seed sequences was created based on the initial oriC dataset by extracting the central 9 bp sequences from the oriC sequences used for training of the classifier. This choice of seed sequences was validated by showing that all oriC sequences in the test dataset are centered around one of the seeds defined this way from sequences in the training dataset.
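A minimal sketch of this first module is shown below: it extracts fixed-length fragments centered on every occurrence of a seed sequence. The fragment length of 1250 bp follows the classifier described above; the seed shown and the function name are placeholders.

```python
def candidate_fragments(chromosome, seeds, fragment_length=1250):
    """Extract fixed-length fragments centered on every occurrence of a seed sequence."""
    half = fragment_length // 2
    fragments = []
    for seed in seeds:
        start = chromosome.find(seed)
        while start != -1:
            center = start + len(seed) // 2
            left = center - half
            if left >= 0 and left + fragment_length <= len(chromosome):
                fragments.append(chromosome[left:left + fragment_length])
            start = chromosome.find(seed, start + 1)
    return fragments


# Toy chromosome with a single 9 bp seed occurrence.
fragments = candidate_fragments("A" * 2000 + "TTATCCACA" + "G" * 2000, ["TTATCCACA"])
print(len(fragments), len(fragments[0]))
```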
Finally, the third module of γBOriS assigns a class label to every fragment based on the classification value obtained for this sequence in the second module. As, for one input sequence, the number of candidate sequences is expected to be much higher than the number of correct oriC sequences, this is a highly imbalanced problem. To mitigate high numbers of false positive classifications, we make use of the concept of classification with abstaining (42). To this end, two cutoffs are employed; below the lower cutoff, fragments are labeled "negative", above the upper cutoff, fragments are labeled "positive", and between the cutoffs, the classifier abstains from labeling the fragments. In the choice of cutoffs, we aimed both to maximize the F1 value and to minimize the number of correct oriC sequences for which this module abstained from classification, leading to a Pareto-optimal state. For the sequences used to train γBOriS, we found that normalizing the classification values of the fragments extracted from one sequence to the range [0, 1] and employing cutoffs of 0.99 and 0.41 led to the best result on the test dataset (F1 of 0.943 with 0.7% of correct oriCs in the abstained space). γBOriS as a web tool as well as a standalone software and its source code are freely available at BOriS.heiderlab.de.
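A hedged sketch of the abstaining third module is given below: per-sequence classification values are min-max normalized and then labeled with the two cutoffs quoted above (0.41 and 0.99). The normalization choice and the toy scores are assumptions for illustration, not the exact γBOriS implementation.

```python
import numpy as np


def classify_with_abstention(scores, lower=0.41, upper=0.99):
    """Min-max normalize per-sequence scores and label fragments, allowing abstention."""
    scores = np.asarray(scores, dtype=float)
    span = scores.max() - scores.min()
    norm = (scores - scores.min()) / span if span else np.zeros_like(scores)
    labels = []
    for value in norm:
        if value < lower:
            labels.append("negative")
        elif value > upper:
            labels.append("positive")
        else:
            labels.append("abstain")
    return labels


# SVM decision values for the candidate fragments of one input sequence (toy numbers).
print(classify_with_abstention([-1.2, -0.4, 0.1, 2.5]))
```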
Construction of BOriS DB.
In order to create a large dataset for gammaproteobacterial oriC sequences, we applied γBOriS to all chromosomes and chromosomal fragments present in the RefSeq database (restricted to sequences with the release type "Major") as well as the genomes in the Uncultivated Bacteria and Archaea (UBA) dataset (38). Both datasets contain a high number of incompletely sequenced chromosomes and chromosomal fragments. After discarding sequences present in both databases, we retained 25,827 oriC sequences from 1,217 different gammaproteobacterial species, most of which had not been identified before. These sequences constitute the first version of BOriS DB and are available for download at boris.heiderlab.de.
Comparison to DoriC.
Due to the fact that only very few oriC sequences are experimentally confirmed for Gammaproteobacteria, and there is no established oriC benchmark dataset, a direct comparison of oriC identification tools is infeasible. Therefore we compared sequences collected using γBOriS to the current state-of-the-art oriC database, DoriC (21,22). To this end, we created an oriC dataset by using the 462 chromosomes present in DoriC as input for γBOriS.
For 330 of the chromosomes listed in DoriC, we find the same oriC sequence; for 156, however, there is disagreement. To evaluate which of the datasets is more consistent, we calculated pairwise similarity matrices for all sequences in DoriC and the results from γBOriS, respectively. The underlying assumption behind this method is that the oriC sequences of different organisms are related evolutionarily and therefore show a high amount of similarity; mis-identified sequences will be more different from the other sequences in the dataset.
Visualization using multidimensional scaling shows that the sequences identified using γBOriS generally form tighter clusters than the sequences stored in DoriC (fig. 3), which indicates more consistency. This result is also supported by phylogenetic trees derived from the pairwise distance matrices (see supplementary information). A closer inspection of these results shows that, while for most orders γBOriS is more consistent than DoriC (as, e.g., for Vibrionales and Xanthomonadales), the contrary is true for chromosomes from, e.g., Methylococcales and Thiotrichales. For many Gammaproteobacterial endosymbionts, oriC sequences are not well identified either in DoriC or by γBOriS, which is due to the fact that most of these genomes lack a dnaA gene and rely on a different initiation method (43).
Discussion
Currently, one of the most promising applications for machine learning methods in bioinformatics is the classification and identification of DNA sequences (44,45). While machine learning methods have already been employed for the identification of origins of replication in yeast (46), oriC identification in bacterial chromosomes is still performed based on chromosome-wide nucleotide disparities such as the GC skew. As these methods are limited to fully sequenced chromosomes, no oriC sequences can be identified for a huge number of only fragmentarily sequenced genomes. Furthermore, the methods developed for eukaryotic chromosomes cannot easily be applied to bacterial chromosomes, as the composition of these sequences is radically different (47). In contrast, γBOriS, which we introduce here and which makes use of a motif-based machine learning method, is able to identify oriC sequences on chromosomal fragments as well as full chromosomes of Gammaproteobacteria. Due to the fact that there is a high degree of variance in oriC structure between taxonomic classes (7,43), we limited the scope of γBOriS to Gammaproteobacteria. Furthermore, as most secondary chromosomes do not rely on the initiator protein DnaA for replication initiation (48,49), and as they are rather rare in bacterial cells (3), we also excluded these from the scope of the tool and focused only on primary chromosomes. Using different training datasets, the general approach of γBOriS can easily be adapted for other groups of organisms. Suitable datasets, however, are currently not easily available in the necessary amount and quality (e.g., same-sized, centered, and co-oriented) because current oriC identification methods do not provide the identified sequences in this manner. The semi-automatic method used to create an initial oriC dataset in this study assumes that (I) oriC is intergenic, (II) close to the global GC skew minimum, and (III) defined by the DUE, as well as (IV) the presence of DnaA boxes. The fact that this method requires manual decision-making makes it hard to automate, but also ensures that the weight of the assumptions can be balanced and adjusted for every single case. Therefore, we consider this method highly accurate, which is supported by the fact that oriC sequences identified with it have been confirmed experimentally (9,35,36). Being solely trained on sequences gathered with the method used in this paper, γBOriS can be used as an automation and extension of it. By applying γBOriS to fragments deposited in public sequence databases, we created BOriS DB, the largest available database of oriC sequences to date. BOriS DB currently contains 25,827 sequences from 1,217 species, most of which had not been identified before. The sequences in this database show a high degree of consistency, which indicates a high degree of accuracy in prediction (see fig. 3 and supplementary information). A comparison of BOriS DB to DoriC suggests that both databases are more reliable for some taxonomic groups than for others, with γBOriS, in total, showing a better performance. γBOriS enables researchers to identify oriC sequences on fragments of bacterial chromosomes, making it possible to integrate it into next-generation sequencing pipelines. As can be seen from the construction of BOriS DB, this leads to a large amount of newly identified oriC sequences, among which are sequences from organisms that are notoriously hard to sequence and impossible to culture.
This enables the use of data-intensive cutting-edge methods such as deep learning (50) for the identification of previously unknown initiation factors, which might, due to their high degree of taxonomic specificity, be good candidates for targets of new antibiotics (51). Furthermore, a deeper knowledge of the components of oriC will make it possible to de novo design chromosomes with desired replication characteristics and synthetic oriC sequences (52). γBOriS as a web tool as well as a stand-alone software, its source code, and BOriS DB are freely available at BOriS.heiderlab.de.
"Biology"
] |
Modeling of Disordered Protein Structures Using Monte Carlo Simulations and Knowledge-Based Statistical Force Fields
The description of protein disordered states is important for understanding protein folding mechanisms and their functions. In this short review, we briefly describe a simulation approach to modeling protein interactions, which involve disordered peptide partners or intrinsically disordered protein regions, and unfolded states of globular proteins. It is based on the CABS coarse-grained protein model that uses a Monte Carlo (MC) sampling scheme and a knowledge-based statistical force field. We review several case studies showing that description of protein disordered states resulting from CABS simulations is consistent with experimental data. The case studies comprise investigations of protein–peptide binding and protein folding processes. The CABS model has been recently made available as the simulation engine of multiscale modeling tools enabling studies of protein–peptide docking and protein flexibility. Those tools offer customization of the modeling process, driving the conformational search using distance restraints, reconstruction of selected models to all-atom resolution, and simulation of large protein systems in a reasonable computational time. Therefore, CABS can be combined in integrative modeling pipelines incorporating experimental data and other modeling tools of various resolution.
Introduction
There is a growing body of evidence that some proteins act in multiple structural states [1]. It has been demonstrated that the ability of these proteins to switch between distinct structural states may be crucial for their function and regulation [1]. Additionally, a number of key biological functions have been proven to be performed by disordered or partially unstructured proteins [2]. Some proteins fold and obtain their structure only upon binding to their partners, while others form so-called "fuzzy complexes" in which both proteins retain a certain degree of disorder [3]. These discoveries modified the core biochemistry principle of "structure determines function". By now, a consensus has been reached that protein function may be a result of an interplay between protein structure and its dynamics [4,5].
Internal protein motions may be studied both experimentally and with computational methods [6,7]. For example, nuclear magnetic resonance (NMR) spectroscopy is one of the richest sources of information on protein structure and dynamics, especially when accompanied with assisting computational modeling.
CABS Dynamics and Interaction Model
Since its development, the CABS model (C-alpha, C-beta and Side chain model) has been applied to a variety of modeling problems, such as protein folding mechanisms [49,50,[52][53][54][55][56][57], protein structure prediction [58][59][60][61], protein-peptide docking including large-scale conformational flexibility [62][63][64][65][66][67][68] and simulations of near-native fluctuations of globular proteins [69][70][71][72][73]. When combined with careful bioinformatics selection of the generated models, CABS proved to be one of the two most accurate structure prediction tools evaluated in the CASP (Critical Assessment of protein Structure Prediction) experiment [60]. The CABS model uses up to four atoms or pseudo-atoms per residue (see the description below), but outputs protein systems in C-alpha representation only. Therefore, for practical applications, the obtained models need to be reconstructed to all-atom representation. In various multiscale modeling tools discussed below, CABS has been integrated with the MODELLER-based reconstruction procedure [74]. Other reconstruction scenarios are also possible to ensure the best possible quality of local protein structure. This can be realized by combination of different tools for protein backbone reconstruction from the C-alpha trace and side chain reconstruction, like BBQ [75] or SCWRL [76] for example, and optionally further refinement [77].
In this review, we discuss the applicability of the CABS CG model and its knowledge-based statistical force field [28] to the modeling of disordered or unfolded protein states. In the CABS model the polypeptide chain representation is reduced to up to four unified atoms per residue (see Figure 1). These interaction centers represent lattice-confined C-alpha atoms, C-beta atoms, the united side chain pseudo-atom, and additionally, pseudo-atoms representing geometrical centers of peptide bonds needed to define the hydrogen pseudo-bond. An example of a polypeptide chain in CABS representation is presented in Figure 1b. Even though the restriction of the C-alpha trace to the underlying low spacing (0.61 Å [28]) cubic lattice may appear to be a drastic simplification, it is not. Allowing small fluctuations of the C-alpha, C-alpha distance enables hundreds of possible orientations of this pseudo bond, and thereby the resulting model chains do not show any noticeable directional biases. Furthermore, the averaged resolution of the C-alpha traces is acceptable and below 0.5 Å [28]. Additionally, the lattice representation enables pre-calculation of local moves and corresponding changes of interactions, leading to a few times faster simulations in comparison with otherwise equivalent continuous space CG models [11].
The CABS model uses a knowledge-based statistical force field that consists of generic, sequence-independent interaction terms that favor protein-like conformations, and sequence-dependent interaction terms that determine some structural details [11,28,78]. The generic force field terms are derived from general features of polypeptide chains that result in protein-like behavior of the model chains. They account for properties of protein chains such as local stiffness, their biases toward secondary structures and packing compactness. The residue-residue interaction terms are based on contact geometry statistics derived from folded globular proteins (illustrated in Figure 2a). Nevertheless, the local packing regularities in unfolded states appear to be very similar to those observed in native structures [11,28,33]. Thereby, CABS simulations provided correct pictures of protein folding [49,[52][53][54][55][56]60] and flexibility of globular proteins [70,71].
The resulting force field takes a form of a precomputed matrix of contact pseudo-energies, presented schematically in Figure 2b. Additionally, to allow successful modeling of membrane proteins the CABS force field can be extended by introducing effective dielectric constant terms [79].
Figure 1. A three-residue protein fragment in: all-atom (a) and CABS model (b) representation. The spheres represent atoms: blue, C-alpha and C-beta atoms (the same in both representations); yellow, side chain atoms (one pseudo-atom in CABS); red, atoms involved in the peptide bond (one pseudo-atom in CABS placed in the geometric center of the peptide bond). A single slice (layer) of the lattice that confines the C-alpha trace in the CABS model is also presented.
Figure 2. Panel (b) shows an example matrix of contact energies, which depend on the geometry of the contacting pair, on the main chain geometry (compact (C) or extended (E)) for both amino acids (left part of the panel), and also on the amino acid identities (right part of the panel; the amino acids are represented using the one-letter code). The PCC matrix is presented, which shows interaction energies between residues in parallel orientation (P), where both residues belong to a compact (C) type of structure.
The main difference between CABS and other statistical force fields used in CG models of similar resolution [11] is the context and orientation dependence of side chain interaction pseudo-energy that encodes characteristic patterns observed in globular proteins. For instance, the oppositely charged side chains in single globules mostly contact in an almost parallel fashion (usually on the surface of a globule), while the antiparallel contacts (usually in the buried regions of the protein globule) are very rare. Therefore, in the context dependent force field these antiparallel contacts of oppositely charged residues are treated as repulsive. This way, the CABS force field implicitly incorporates information on the complicated interaction patterns with the solvent (via contact statistics) and its entropic contribution to system thermodynamics [11,28].
Applying the mean-force force field derived from folded proteins to simulations of less-structured systems raises justified questions about the validity of this approach in studies of the disordered protein regions. The folding events observed in simulations performed using the CABS force field are consistent with both the experimental data and all-atom MD simulations [49,52,80,81]. Thus, it is hypothesized that unstructured (unfolded, partially unfolded or intrinsically disordered) proteins to a significant extent share similar stabilizing interaction patterns with the patterns observed for their well-structured counterparts [82,83].
The CABS method uses the MC asymmetric Metropolis sampling scheme that governs a set of local motions as well as multi-residue, small-distance moves of the C-alpha atoms (see Figure 3). The method uses a replica exchange algorithm with simulated annealing to enhance the sampling of conformational states. The simulation is organized as a set of nested loops, in which the s number of MC steps are organized into the y number of MC cycles, and these in turn into the a number of annealing cycles. Each of the MC steps consists of a preset number of attempts to perform each of the five standard precomputed moves. The available motions and the details of implementation of the sampling scheme are presented in Figure 3. The combination of the key features of CABS (its representation, force field, and the scale of the movements used in the MC scheme) makes it suitable for the investigation of protein pseudo-dynamics. As mentioned above, the fine-grained lattice improves sampling efficiency, achieving effective timescales of milliseconds. As compared with MD, this is a considerably broader time range (in the study of flexibility of folded proteins [71], the CABS dynamics was estimated to be around 6 × 10^3 times cheaper in terms of computational cost than classical MD). The chosen micro-motions allow (via accumulation over simulation steps) cooperative, large-scale motions. The ensemble of structures produced by the CABS method resembles a dynamic ensemble averaged over the effective timescale. Due to the nature of the method, the picture of local dynamics is distorted (on the level of local moves); however, it may be argued (based on the works mentioned above that compared our simulations with experimental data) that the long-time pseudo-dynamics recovers the realistic picture of protein motions averaged over time.
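The nested-loop organization described above can be sketched schematically as follows. This is not the CABS implementation: the energy function, move proposal, temperature schedule and loop sizes are placeholders, and replica exchange is omitted for brevity.

```python
import math
import random


def metropolis_mc(energy, propose_move, state, n_annealing=3, n_cycles=10,
                  n_steps=100, t_start=2.0, t_end=1.0):
    """Schematic nested-loop Metropolis Monte Carlo with simulated annealing."""
    for a in range(n_annealing):
        # Simple linear annealing schedule between t_start and t_end.
        temperature = t_start + (t_end - t_start) * a / max(n_annealing - 1, 1)
        for _ in range(n_cycles):
            for _ in range(n_steps):
                candidate = propose_move(state)
                delta = energy(candidate) - energy(state)
                # Metropolis criterion: downhill moves are always accepted.
                if delta <= 0 or random.random() < math.exp(-delta / temperature):
                    state = candidate
    return state


# Toy 1D example: a single coordinate relaxing toward the minimum of a quadratic "energy".
final = metropolis_mc(lambda x: x * x,
                      lambda x: x + random.uniform(-0.5, 0.5),
                      state=5.0)
print(round(final, 2))
```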
The timescale of the CABS simulations is not a priori defined and depends on the CABS simulation temperature, due to hidden entropic contributions in the force field, accounting for implicit solvent effects and multi-body interactions encoded in the statistical force field. Nevertheless, the effective timescale of MC dynamics can be approximately identified by comparison with MD trajectories from sufficiently long simulations. This comparison was thoroughly discussed previously, and the results were compared to MD results [69] and NMR ensembles [71].
The CABS model is presently used as a simulation engine of a few multiscale modeling tools that merge CABS with model reconstruction to all-atom resolution. Those include the CABS-dock method for flexible protein-peptide docking (available as a web server [62] at http://biocomp.chem.uw.edu.pl/CABSdock and a standalone application [84] at https://bitbucket.org/lcbio/cabsdock/) (accessed on 30 January 2019). In comparison to other protein-peptide docking tools, reviewed recently [85], CABS-dock offers a unique opportunity for modeling large-scale rearrangements of protein receptor structure during on-the-fly docking of fully flexible peptides. Another CABS-based tool, CABS-flex, enables fast simulations of protein flexibility (available as a web server [73] at http://biocomp.chem.uw.edu.pl/CABSflex and a standalone application [72] at https://bitbucket.org/lcbio/cabsflex/, accessed on 30 January 2019). This approach has also been incorporated as the module in the Aggrescan3D method for prediction of protein aggregation properties (available as a web server [86] at http://biocomp.chem.uw.edu.pl/A3D and a standalone application at https://bitbucket.org/lcbio/aggrescan3D, accessed on 30 January 2019). By using CABS-flex predictions, Aggrescan3D enables predicting the impact of protein conformational fluctuations on aggregation properties. Finally, the CABS model is used in the CABS-fold method for protein structure prediction: in the de novo fashion (from an amino acid sequence only), guided by user-provided templates or user-provided distance restraints (available as a web server [58] at http://biocomp.chem.uw.edu.pl/CABSfold/, accessed on 30 January 2019). The access to CABS-based tools, together with the tools' description, is also available from the websites of the laboratories: http://biocomp.chem.uw.edu.pl/ and http://lcbio.pl/ (accessed on 30 January 2019).
CABS Applications to Simulation of Disordered or Unfolded Proteins
In this section, we review CABS applications to simulations of protein-peptide binding (Section 3.1) and folding of globular proteins (Section 3.2). We briefly discuss modeling results for the binding of three protein-peptide systems and protein folding of one protein system. Figure 4 shows native conformations of these systems determined by X-ray crystallography or NMR. In the figure, they are arranged according to the size of a fully flexible fragment of the modeled system, effective timescales required for a meaningful simulation of their motions, and thus the modeling difficulty: (1) modeling of FxxLF motif peptide docking to an androgen receptor (AR), (2) investigation of binding and folding of an unstructured pKID protein to KIX protein, (3) modeling of p53-derived peptide docking to the MDM2 protein receptor with partially unstructured regions, and (4) simulation of the de novo folding of barnase. The simulations were performed using the CABS-dock method for protein-peptide docking [62] and the CABS-flex methodology [72,73], which enables running de novo folding simulations.
Protein-Peptide Binding
The CABS-dock method has been extensively tested using the PeptiDB benchmark set of protein-peptide complexes [62,65,87]. One of the benchmark cases is the androgen receptor ligand binding domain (AR) in complex with a peptide with the FxxLF motif [88] (PDB code: 1T7R). To further analyze the interaction details of this complex, we performed blind global docking (using no knowledge about the binding site and peptide conformation) using CABS-dock [62]. As the input we used information on the peptide sequence (incorporating the FxxLF motif: SSRFESLFAGEKESR), peptide secondary structure information assigned by the DSSP method [89], and the structure of the AR protein receptor. In this docking study, the peptide structure was simulated as fully flexible, while fluctuations of the protein receptor were limited to small backbone movements around the input structure (around 1 Å). The docking simulation started from random peptide conformations placed in random positions around the receptor structure. During the simulation, the peptide remained unstructured until it was bound to the receptor binding site (Figure 5a). The docking simulations provided a set of high-quality models, with the best model characterized by a peptide-RMSD (root-mean-square deviation) value of 1.97 Å, and contact maps in strong agreement with the experimental data. As expected from the experimentally obtained structures and sequence analysis [88], the FxxLF interaction motif residues were most frequently involved in stabilizing hydrophobic interactions with the receptor. These high-frequency contacts are clearly visible in Figure 5a.
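For readers unfamiliar with the two evaluation measures used here (peptide-RMSD and contact agreement), the sketch below computes RMSD for already-superimposed C-alpha coordinates and the fraction of native contacts under a distance cutoff. The 8 Å cutoff and the assumption of prior superposition are simplifications, not the exact CABS-dock evaluation protocol.

```python
import numpy as np


def rmsd(model, native):
    """RMSD between two already-superimposed C-alpha coordinate arrays (N x 3)."""
    diff = np.asarray(model) - np.asarray(native)
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))


def contact_map(coords, cutoff=8.0):
    """Boolean residue-residue contact map from C-alpha coordinates."""
    coords = np.asarray(coords)
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return dist < cutoff


def fraction_native_contacts(model, native, cutoff=8.0):
    """Fraction of native contacts reproduced by a model."""
    native_map = contact_map(native, cutoff)
    model_map = contact_map(model, cutoff)
    np.fill_diagonal(native_map, False)
    return float((native_map & model_map).sum() / max(native_map.sum(), 1))


# Toy example: three C-alpha positions, model uniformly shifted from the "native".
native = np.array([[0.0, 0, 0], [3.8, 0, 0], [7.6, 0, 0]])
model = native + 0.5
print(round(rmsd(model, native), 2), fraction_native_contacts(model, native))
```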
The study of the pKID/KIX system [63] involved performing a folding simulation of an intrinsically disordered protein (pKID) and its binding to a well-structured KIX receptor (Figure 5b). According to the experimental studies, the pKID structure is disordered in its unbound form with a slight propensity toward a helix (for detailed description on how one-dimensional secondary structure information is used in the CABS model see [78]). In the complex with the KIX protein, pKID adopts a characteristic conformation of two perpendicular helices that wrap around the receptor. However, most simulation results for the coupled folding and binding of this system published prior to the CABS-based study used models which biased pKID toward its native conformation (see the discussion in [63]). Using our method for studying this system enabled fully flexible treatment of the pKID protein.
The obtained results [63] suggested a binding mechanism that involves two encounter complexes and were in good agreement with the available NMR experimental data. The predicted models presented high fractions of native contacts and allowed identification of residues essential for the binding and stabilization of the complex.
In the simulation of MDM2/p53 binding [64], the most challenging task was to adequately model the flexibility of the relatively long, unstructured regions of the protein receptor in addition to the fully flexible peptide [64,90] (Figure 5c). To provide a detailed insight into MDM2/p53 binding, we performed CABS-dock simulations and captured system behavior in agreement with the experimental data [64]. During the simulation, the flexible N- and C-terminal MDM2 fragments remained significantly disordered. The best resulting model was characterized by a peptide-RMSD value of 2.76 Å and 54% of the native contacts, while the top-ranked model by 3.74 Å and 60%, respectively. During simulations, we observed ensembles of models in which the peptide adopted different conformations loosely bound to the binding site and models in which the N-terminal highly flexible MDM2 fragment was interacting with the binding site. These findings are in agreement with the experimental data suggesting that p53-MDM2 binding is affected by significant rearrangements of the N-terminal MDM2 fragment (see discussion in [64]).
Figure 5. (b) The pKID/KIX system [63]; the map presents the frequency of contacts of near-native conformations obtained in the simulation. (c) Modeling of p53 peptide binding to the MDM2 receptor [64], which includes fully-flexible regions of the protein receptor (shown in cyan) interacting with a fully-flexible peptide (shown in red). (d) Modeling of barnase folding [52] in the de novo fashion (using no knowledge about the structure); the map is a residue-residue contact map showing relative contact frequencies in denaturing conditions; the protein fragments that form the folding nucleation site are colored in cyan in the presented folded structure of barnase.
Folding and Flexibility of Globular Proteins
The CABS model has been applied to de novo simulations of protein folding (using no knowledge about the protein structure) for several model systems that have been extensively studied by experiment and simulation tools. Those studies include barnase [50,52], chymotrypsin inhibitor [50,52], B1 domain of protein G [49,50], B domain of protein A [53], and others [50,54]. The CABS modeling protocol was also extended to enable studies of the chaperonin effect on the folding mechanism [55]. In these works, various parameters have been studied, including residue-residue contact frequency, radius of gyration, residual secondary structure and others. The obtained pictures, which covered protein dynamics from highly denatured states to ensembles close to the folded states, agreed well with available experimental data.
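Two of the analysis quantities named above, the radius of gyration and trajectory-averaged residue-residue contact frequencies, can be computed from C-alpha coordinates as sketched below. The 8 Å contact cutoff and the toy trajectory are illustrative assumptions.

```python
import numpy as np


def radius_of_gyration(ca_coords):
    """Radius of gyration of one conformation from C-alpha coordinates (N x 3)."""
    coords = np.asarray(ca_coords, dtype=float)
    centered = coords - coords.mean(axis=0)
    return float(np.sqrt((centered ** 2).sum(axis=1).mean()))


def contact_frequencies(trajectory, cutoff=8.0):
    """Relative residue-residue contact frequencies over a trajectory of conformations."""
    trajectory = np.asarray(trajectory, dtype=float)  # shape: (frames, residues, 3)
    n_res = trajectory.shape[1]
    freq = np.zeros((n_res, n_res))
    for frame in trajectory:
        dist = np.linalg.norm(frame[:, None, :] - frame[None, :, :], axis=-1)
        freq += (dist < cutoff)
    return freq / len(trajectory)


# Toy trajectory: two frames of a three-residue chain.
traj = [[[0, 0, 0], [3.8, 0, 0], [7.6, 0, 0]],
        [[0, 0, 0], [3.8, 0, 0], [20.0, 0, 0]]]
print(round(radius_of_gyration(traj[0]), 2))
print(contact_frequencies(traj)[0, 2])  # contact frequency between residues 1 and 3
```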
For example, simulation of barnase folding resulted in the adequate reproduction of the folding pathway in strong agreement with NMR data for denatured states and phi-value analysis [52]. The performed simulations show that barnase folding starts with developing a folding nucleation site that consists of protein fragments corresponding to two strands of a beta sheet and one of the helices in the folded structure (presented in Figure 5d). In addition, the characteristic patterns of hydrophobic interactions that are crucial for the initiation and sustenance of folding are in agreement with the experimental data (see discussion in Reference [52], the contact map resulting from these simulations is presented in Figure 5d).
Conclusions
The presented case studies review the applications of the CABS model in simulations of disordered or unfolded protein states. As discussed, the method succeeded in capturing the experimentally determined features of the investigated systems, such as binding site localization, key contacts, peptide hot-spot areas, distinctive conformational states of the system, transient encounter complexes and intermediate states in protein folding [49,52,63,64]. Additionally, CABS enables an investigation of fluctuations of globular proteins around the native (input) structure [69][70][71][72][73].
There are a number of tools commonly used for sampling disordered protein states whose predictions agree with experimental studies [91][92][93][94][95]. The CABS method is complementary to these and provides a unique approach allowing for effective modeling of both ordered and disordered elements of the system. As observed in many previous studies, these features of the CABS method allow it to provide accurate pictures of folding pathways [49,[52][53][54][55][56]60] and near-native dynamics [70,71]. Obviously, due to its coarse-graining, the geometric details are missed, and their reconstruction is approximate [11,28]. The main distinctive feature of the CABS method as compared to the available tools is that the ensemble generation is (pseudo-)energy driven and thus may provide some information on the dynamics of the system. This is not the case in the above-mentioned examples of methods based on random walks [91,92,95].
On the other hand, the side-chain interactions in the CABS force field escape a clear physical interpretation, which may be a disadvantage compared to physics-based approaches that allow for a straightforward and detailed description of each of the terms [93,94].
It is, however, noteworthy that statistical force fields suffer from inherent limitations, depending on the chosen method of derivation. The most commonly discussed challenges include transferability, solvent interactions, and integration of experimental data. Here, we briefly summarize these topics; a detailed discussion of the limitations of this approach and possible workarounds may be found in review works [11,17]. The transferability of statistical force fields may be limited, as they are always applicable only to a certain subset of proteins. Therefore, the performance of knowledge-based approaches may be poor for rare or atypical structures, for which appropriate statistics of contact patterns could not be collected. It should also be noted that interactions with solvent are averaged and treated implicitly, which may lead to significant discrepancies if the method is applied to non-standard solvent conditions (such as extreme pH values). The CABS force field is derived assuming averaged solvent conditions for folded globular proteins. Therefore, subtle effects of solvent conditions, such as pH, cannot be simulated in a strict fashion, although averaged effects (see modeling of the chaperonin effect [55]) can be approximately taken into consideration.
One of the most challenging tasks in modeling protein systems is the effective incorporation of sparse experimental data to drive the modeling procedure. In the CABS model, the experimental data may be readily introduced into the simulation as geometry distance restraints and weighted according to their certainty. A thorough discussion of this possibility is presented in the documentation of CABS-based tools for the fast modeling of protein flexibility and protein-peptide docking [66,72,73]. On a similar basis, CABS simulations can be guided by computational predictions from other sources or integrated with other modeling tools of various resolution. Therefore, the CABS model can be incorporated into integrative modeling pipelines that would benefit from its effective sampling scheme. The recently published standalone application and web server tools are available for integration with external pipelines (access links are presented in the last paragraph of Section 2).
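As a purely conceptual sketch of how a distance restraint weighted by its certainty can contribute to a pseudo-energy (the flat-bottom harmonic form, tolerance, and weights below are illustrative assumptions, not the actual CABS implementation):

```python
def restraint_penalty(d, d0, tolerance=1.0, weight=1.0):
    """Flat-bottom harmonic penalty for a single distance restraint.

    d         : distance observed in the current model (angstroms)
    d0        : target distance suggested by the experimental datum
    tolerance : half-width of the penalty-free region, reflecting the
                uncertainty of the measurement
    weight    : confidence assigned to this restraint
    """
    violation = max(0.0, abs(d - d0) - tolerance)
    return weight * violation ** 2

# a restraint from a confident source gets a larger weight
print(restraint_penalty(d=12.5, d0=9.0, tolerance=1.0, weight=2.0))  # penalized
print(restraint_penalty(d=9.4, d0=9.0, tolerance=1.0, weight=2.0))   # inside tolerance -> 0
```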
Author Contributions: S.K. and A.K. conceptualized this review. M.P. performed the simulations and analyzed the results for the AR/FxxLF system. The review was written by M.P.C., A.E.B-D., A.K. and S.K.
Conflicts of Interest:
The authors declare no conflict of interest. | 7,092.4 | 2018-12-17T00:00:00.000 | [
"Biology"
] |
Transcriptomic Analysis of Human Keratinocytes Treated with Galactomyces Ferment Filtrate, a Beneficial Cosmetic Ingredient
Galactomyces ferment filtrate (GFF, Pitera™) is a cosmetic ingredient known to have multiple skin care benefits, such as reducing redness and pore size via the topical application of its moisturizer form. Although GFF is known to act partly as an antioxidative agonist for the aryl hydrocarbon receptor (AHR), its significance in keratinocyte biology is not fully understood. In this study, we conducted a transcriptomic analysis of GFF-treated human keratinocytes. Three different lots of GFF consistently modulated 99 (22 upregulated and 77 downregulated) genes, including upregulating cytochrome P450 1A1 (CYP1A1), a specific downstream gene for AHR activation. GFF also enhanced the expression of epidermal differentiation/barrier-related genes, such as small proline-rich proteins 1A and 1B (SPRR1A and SPRR1B), as well as wound healing-related genes such as serpin B2 (SERPINB2). Genes encoding components of tight junctions claudin-1 (CLDN1) and claudin-4 (CLDN4) were also target genes upregulated in the GFF-treated keratinocytes. In contrast, the three lots of GFF consistently downregulated the expression of inflammation-related genes such as chemokine (C-X-C motif) ligand 14 (CXCL14) and interleukin-6 receptor (IL6R). These results highlight the beneficial properties of GFF in maintaining keratinocyte homeostasis.
Introduction
The maintenance of epidermal homeostasis and structural function is critical for healthy and stress-tolerant skin with a youthful appearance. Facial appearance is an important issue, not only in the elderly, but also in young women [1,2]. Previous studies revealed that skin moisturization is beneficial for keeping a youthful facial appearance [2,3]. A skincare formula containing Galactomyces ferment filtrate (GFF, Pitera™) is a functional moisturizing agent, because its topical application was shown to significantly reduce facial erythema, roughness, and pore dilation in two independent clinical trials [2]. In addition, the GFF-containing skincare formula ameliorated the mask-induced exacerbation of facial pore dilation and redness [4].
GFF upregulates the expression of epidermal differentiation complex genes [5] located on chromosome 1q21 [6]. It also ameliorates oxidative stress triggered by various stimuli via the activation of the antioxidative system in keratinocytes [7][8][9][10]. In addition, GFF is known to exert its functional activity, at least in part, as an agonist for the aryl hydrocarbon receptor (AHR) [5,8,9]. However, the detailed activity of GFF on human epidermal keratinocytes remains largely unknown.
Cell Treatment and Sample Preparation for Microarray Analysis
The keratinocyte cell line tKC (tert keratinocytes) was a kind gift from Dr. Jerry W. Shay (University of Texas Southwestern, Dallas, TX, USA) [21,22]. tKC cells were plated at a density of 100,000 cells/well into 12-well plates (Corning BioCoat, REF 354500; Corning Inc., Corning, NY, USA). After growth for 24 h at 37 °C in a CO2 incubator, the tKCs were treated with a medium control (10% water) or GFF (10%; P&G Innovation GK, Kobe, Japan) for 24 h before harvesting for microarray analysis. We used three different lots of GFF in this study. Transcriptomic analysis was performed as reported previously [22]. Briefly, samples were collected in RNAlater® buffer, flash-frozen, and stored at −80 °C prior to RNA extraction. RNA was extracted and purified using the RNeasy kit (QIAGEN, Germantown, MD, USA). Purified RNA was converted to biotin-labeled complementary RNA copies using the HT 3' IVT Plus kit (Affymetrix, Santa Clara, CA, USA), as per the manufacturer's protocol. Biotinylated cRNA was fragmented using limited alkaline hydrolysis and then hybridized overnight to Affymetrix GeneTitan U219 array plates using the Affymetrix GeneTitan instrument and protocol (ThermoFisher Scientific, Waltham, MA, USA).
Statistical Analysis of Microarray Data
Probe set expression values were calculated with quartile normalization and PLIER summarization algorithms. Differentially expressed genes were analyzed using the empirical Bayes method implemented in the R limma package [23]. p values less than 0.001 were considered statistically significant. An increase in gene expression of more than 1.5-fold or a decrease to less than 0.75-fold compared with that of the control treatment was defined as meaningful transcriptomic modification.
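The selection criteria described above (p < 0.001 together with a greater than 1.5-fold increase or a less than 0.75-fold decrease relative to control) can be expressed as a simple filter. The sketch below uses hypothetical column names and placeholder values; the actual differential-expression statistics come from the limma empirical Bayes analysis run in R.

```python
import pandas as pd

# Hypothetical per-gene summary produced upstream: mean expression in control
# and GFF-treated samples plus the limma p value (placeholder numbers).
df = pd.DataFrame({
    "gene":   ["CYP1A1", "SPRR1B", "KRT15", "GAPDH"],
    "ctrl":   [100.0, 80.0, 300.0, 5000.0],
    "gff":    [178.0, 178.0, 150.0, 5100.0],
    "pvalue": [1e-5, 5e-6, 2e-4, 0.4],
})

df["fold_change"] = df["gff"] / df["ctrl"]

significant = df["pvalue"] < 0.001
upregulated = significant & (df["fold_change"] > 1.5)
downregulated = significant & (df["fold_change"] < 0.75)

# genes counted as meaningfully modulated under the stated thresholds
print(df[upregulated | downregulated])
```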
Results
To confirm the functional consistency of GFF, we treated human keratinocytes with three different GFF lots. GFF is known as an AHR agonist [5]. The activation of AHR upregulates the expression of its specific downstream gene CYP1A1 [5]. AHR activation also upregulates the expression of epidermal differentiation complex genes such as SPRR1A and SPRR1B [11]. In parallel with the findings in previous studies [5,11], a significant and meaningful upregulation of the CYP1A1 gene was consistently observed for the three independent lots of GFF (mean fold change: 1.777) (Table 1) compared with the control treatment level. All GFFs also significantly upregulated the expression of SPRR1A (mean fold change: 1.647) and SPRR1B genes (mean fold change: 2.231) compared with the control treatment levels (Table 1). These results suggested that GFF exerted its AHR agonist activity irrespective of the product lot. In addition to these three upregulated AHR-related genes, the three GFFs consistently up- or downregulated the expression of 96 other genes (19 upregulated and 77 downregulated) (Tables 1 and 2). Figure 1 and Table 3 list the gene ontology (GO) categories of those genes related to skin. All three GFF lots significantly increased the expression of genes encoding the tight junction proteins CLDN1 and CLDN4 (GO category: establishment of skin barrier), which were previously reported to be upregulated by GFF (Tables 1 and 3) [5,24]. In the GO category of epidermal differentiation, the expression of KRT6A, KRT6B, and KRT13 was upregulated by the three GFFs (Tables 1 and 3), while that of the stem cell marker KRT15 [25] was downregulated (Tables 2 and 3). As with KRT6A and KRT6B, the GFFs consistently upregulated the expression of KRT16, which is an alternative differentiation marker of epidermal keratinocytes [14] (Tables 1 and 3). In the GO category of cell aging, the expression of calreticulin (CALR) was downregulated, whereas that of PLK2 was upregulated (Tables 1 and 3). Meanwhile, the expression of the genes COL7A1, DLL1, and WNT10A (GO category: epidermis development); DDX60, DHX58, and PSMB9 (positive regulation of defense response); DNAJB9, HERPUD1, HSP90B1, SDF2L1, and SEL1L (proteasomal protein catabolic process); GJB2, HEG1, and MICALL2 (cell-cell junction assembly); SYT8 (cellular response to calcium ion); and DST (microtubule-based movement) was consistently decreased by GFF (Tables 2 and 3). In contrast, the expression of the genes GAL (epidermis development) and SERPINB2 (regulation of wound healing) was augmented by GFF. In addition to SERPINB2, GFF also upregulated the expression of the SERPINB1 and SERPINB7 genes (Tables 1 and 3). However, the biological significance of the modified expression of these genes in keratinocytes remains obscure.
The gene expression of the secretory leukocyte peptidase inhibitor (SLPI) [15] is also known to be related to epidermal differentiation. Similar to the abovementioned epidermal differentiation genes, GFF significantly upregulated the expression of the SLPI gene (Table 1). In contrast, the expression of the epidermal proliferation-related gene fibroblast growth factor receptor 3 (FGFR3) [26,27] was downregulated by GFF (Table 3). In addition, GFF was likely to ameliorate the inflammatory process, because it strongly inhibited the expression of the CXCL14 [16,17], IL6R [18,19], and CALR [20] genes (Table 3). Representative genes for which the expression was modified in the GFF-treated keratinocytes are depicted in Figure 2.
Finally, when we set a less stringent threshold for defining significantly modulated genes to an increase in expression of more than 1.2-fold or a decrease to less than 0.8-fold compared with the control, 175 upregulated and 20 downregulated genes were added as target genes modulated by GFF, including S100A8, S100A9, and OVOL1 (Supplementary Tables S1 and S2). Notably, the expression of these three genes is known to be upregulated by AHR activation [28][29][30].
Discussion
The GFF-formulated moisturizing product is a popular skincare product used widely around the world. Two independent clinical trials have shown that its daily application for 4 weeks significantly attenuated not only the intensity, but also the fluctuation of facial redness, roughness, and pore dilation [2]. Topical GFF also stabilized the mask-induced exacerbation of fluctuations in facial redness and pore dilation [4]. The clinical efficacy of GFF may be partly attributable to the fact that it works as an antioxidative AHR agonist [5,[7][8][9][10]. However, the molecular effects of GFF on keratinocytes are not fully understood.
In the present study, we performed the transcriptomic analysis of human keratinocytes treated with three different lots of GFF. In accordance with previous studies [5,11], all three GFFs significantly upregulated the expression of CYP1A1, SPRR1A, and SPRR1B, which are known downstream genes of AHR activation. These results confirmed the AHR agonist activity of GFF, irrespective of the product lot. CYP1A1 may be useful for degrading environmental pollutants [31], while SPRR1A and SPRR1B are important epidermal barrier molecules [6]. In parallel, GFF upregulated the expression of other AHR-mediated genes, such as S100A8, S100A9, and OVOL1. S100A8 and S100A9 form a heterodimer called calprotectin, which works as a keratinocyte alarmin molecule [32]. OVOL1 is a transcription factor essentially involved in the induction of barrier-related proteins [29,30]. In addition, all three GFF lots in this study consistently upregulated the expression of CLDN1 and CLDN4, as reported previously [5,24]. These results suggested that GFF may enhance or accelerate barrier formation (SPRR1A and SPRR1B) and tight junction formation (CLDN1 and CLDN4).
The accelerating activity of GFF on epidermal differentiation or barrier formation can be further highlighted by the fact that it also upregulated the expression of SLPI. SLPI expression is reported to be upregulated in the cornified layer by antioxidative signaling and is related to the desquamation process [15]. Notably, SLPI is also known as an endogenous ligand for the annexin A2 heterotetramer, which serves as an uptake receptor for human papilloma virus in keratinocytes [33]. The blocking of the annexin A2 heterotetramer by SLPI inhibits the human papilloma virus infection [33]. In contrast to the differentiation-prone gene response, GFF is likely to inhibit the proliferation of keratinocytes via the downregulation of FGFR3 expression. FGFR3 plays a crucial role in keratinocyte proliferation, because the gain-of-function mutation of FGFR3 causes the development of epidermal nevi [26,27].
Various chemical and mechanical injuries induce the expression of the alternative differentiation keratin pair KRT6/KRT16 [14,34,35]. Recent studies have revealed that KRT6 and KRT16 act as key early barrier alarmins and upregulate the stress response and innate immunity [34,35]. The present study clearly demonstrated that GFF was a potent inducer of KRT6/KRT16 barrier alarmins. In contrast, KRT15 is recognized as a useful marker of epidermal keratinocytic stem cells [25]. Notably, in the present study, GFF significantly and potently downregulated KRT15 expression. We speculated that GFF may accelerate epidermal keratinocyte differentiation partly through enhancing the exit from stemness by downregulating KRT15.
The GFF-mediated downregulation of CXCL14, IL6R, and CALR may underscore the immunoregulatory function of GFF. CXCL14 is a potent chemoattractant of immune cells, especially monocytes and dendritic cells [16,17]. The proinflammatory cytokine IL-6 is produced in keratinocytes facing barrier disruption or chemicals [18,19], and is related to eczematous dermatitis [20]. Meanwhile, CALR has recently been recognized as an inducer of immunogenic cell death [36] and is critically involved in programmed cell removal by macrophages [37,38]. GFF (Pitera™) is a quality-assured, filtrated material derived from Galactomyces fermentation. It consists of over 50 components, including minerals, vitamins, amino acids, and organic acids. As shown in the present study, three different lots of GFF consistently revealed similar transcriptomic effects on human keratinocytes. There were several limitations to this study. First, no proteomic analysis was performed here, so this needs to be carried out in future work to confirm the present transcriptomic results. As GFF is a mixture of active substances derived from Galactomyces, it is not surprising that it acts on many different targets additively or synergistically. Second, although a meaningful transcriptomic alteration was identified in 99 genes, the roles of most of these genes in keratinocyte biology are not fully understood. Therefore, further studies are warranted to reveal the implications of these genes in keratinocyte homeostasis. Third, the dependence of the 99 genes on AHR remains largely unknown. For example, as mentioned above, AHR regulates the gene expression of CYP1A1, SPRR1A, and SPRR1B [5,11], but that of CLDN1 and CLDN4 is not dependent on AHR activation [5].
In conclusion, this study showed that GFF is a biologically active cosmetic ingredient that serves as an AHR agonist irrespective of the particular product lot. GFF appeared to increase the expression of differentiation/barrier-related genes (SPRR1A, SPRR1B, CLDN1, CLDN4, and SLPI), but decreased that of a proliferation-related gene (FGFR3) in keratinocytes. It also upregulated the barrier alarmin genes (KRT6 and KRT16), while downregulating a stemness gene (KRT15). In addition, GFF likely ameliorated the inflammatory process by downregulating the expression of the CXCL14, IL6R, and CALR genes. The coordinated regulation of these genes may underpin the beneficial activity of GFF in maintaining healthy skin. | 3,460.8 | 2022-08-01T00:00:00.000 | [
"Medicine",
"Biology"
] |
The complex interplay between sectoral energy consumption and economic growth: Policy implications for Iran and beyond
Iran's abundant energy reserves starkly contrast with recent power and gas shortages, particularly impacting the industrial sector. Furthermore, long-term trends reveal a concerning pattern where total primary energy consumption has outpaced economic growth, doubling in recent decades. These challenges emphasize the need for a thorough evaluation of the intricate interplay between sectoral energy consumption and economic output in Iran, bearing profound policy implications. The current study employs ARDL and VECM approaches to analyze empirical long- and short-term dynamics. Regarding Iran, the results unveil causal relationships from industrial energy consumption to GDP and from GDP to energy consumption in buildings. Notably, the significant positive value of the elasticity of GDP with respect to industrial energy use highlights the need for nuanced energy management measures. Variations across sectors underscore the justification for recognizing industrial energy consumption as productive energy use. The results gain additional support from a panel data analysis spanning fourteen diverse countries, bearing significance for IAMs applied in climate change research. While IAMs traditionally employ total energy consumption, or the sectoral energy uses collectively, as production factors, the research highlights the need to reevaluate model frameworks for potentially different outcomes from established practices.
Introduction
Iran, despite its abundant natural gas and other energy resources, grapples persistently with excessive energy consumption due to glaring inefficiencies. The winter of 2023 stands as a stark reminder, as authorities in several regions were compelled to close schools and government offices for extended periods to conserve natural gas. Similarly, prioritizing residential household electricity over industrial needs during scorching summer weeks disrupted the operational continuity of manufacturing firms.
These immediate energy shortage concerns are compounded by long-term systemic challenges in the realm of energy intensity and climate change. Iran finds itself among the top 10 nations worldwide in terms of greenhouse gas (GHG) emissions [1], raising significant environmental and climate change concerns. This is particularly concerning given the global trend of diminishing energy intensity and the emergence of decoupling between economic growth and energy consumption, in many cases, in both developed and, to varying extents, developing countries over recent years. However, Iran exhibits a contrary trajectory (Fig. 1), where energy intensity has been on the rise. An in-depth examination of official statistics spanning various timeframes, including the past 10, 30, and 50 years, unveils a striking pattern: the total primary energy consumption in Iran has grown at a rate twice as swift as the economic output [2]. The confluence of immediate energy supply challenges and systemic complexities within Iran's energy landscape serves as the impetus driving the present study, which aims to comprehensively analyze the nuanced relationship between energy consumption and economic performance [3]. The study endeavors to dissect the intricate relationship between energy consumption and economic output within Iran, utilizing the autoregressive distributed lag (ARDL) and vector error correction model (VECM) approaches within a multivariate framework, thereby offering an empirical lens to explore these pressing concerns.
In addition to offering policy recommendations on sustaining the development of the energy system in Iran, the research outcomes hold the potential to reevaluate the core formulations of the dynamics between energy use and economic output within integrated assessment modeling studies. Nowadays, integrated assessment models are extensively used as crucial tools for gauging the technological and economic viability of climate-related interventions. Prominent among these models are MARKAL/TIMES-MACRO [4] and MESSAGE-MACRO [5], which rest on the premise that factors including capital stock, labor, and energy collectively dictate overall economic output, encapsulated within a nested production function. However, it is imperative to recognize that these models traditionally treat energy consumption in non-productive sectors, such as residential buildings, as a contributing factor to economic value-added. This perspective on the role of energy use in economic production would differ from the nuanced insights gained through cointegration analysis, which delves into the intricate association between sectoral energy consumption and economic growth. The current novel study, unlike many prior studies that primarily investigate total energy consumption or energy consumption across various carriers, prompts a reevaluation of production functions and the iterative information exchange inherent in integrated modeling practices. To reinforce the robustness of the recommendation to regard specific sectoral energy use as a production factor, an extensive examination is also conducted across fourteen different economies, chosen for their geographical diversity.
Also, adopting an alternative analytical approach, the conventional concept of energy is substituted with exergy, offering a unique perspective for evaluating its capacity to provide deeper insights into the underlying dynamics. Exergy, which captures the capability of a unit of energy to perform useful work, is used to determine whether it offers a more effective explanatory framework compared to traditional energy metrics.
By exploring these diverse analytical avenues, the current research extends beyond its specific focus on Iran and contributes valuable insights to the broader discourse surrounding studies at the intersection of energy, economy, and climate considerations. Since the pivotal study by J. Kraft and A. Kraft [6], numerous cointegration and causality analyses have been conducted to investigate how critical the role of energy consumption is in economic growth. The papers by I. Ozturk [7], J. E. Payne [8], N. Apergis and J.E. Payne [9], R. Smyth and P.K. Narayan [10], M. Azam et al. [11], B. N. Iyke [12], and Md. S. Rahman et al. [13] summarize the vast body of literature regarding the proposed countries, intended (proxy) variables, applied econometric methods, and even the results.
Table 1 offers a succinct overview of prior studies in the context of Iran. A comprehensive review of the literature underscores the absence of consistent and comparable results across these various investigations. The divergent findings documented in these studies can be attributed to a range of influential factors, including disparities in the economic development stage of the country, variations in the datasets employed, differences in model specifications, and the use of diverse econometric methodologies [7,8,10,14].
This study distinguishes itself from prior research on Iran by incorporating a comprehensive set of variables, including GDP, capital, labor, and trade openness, while also considering segmented energy use, in line with established theories of economic growth. The conventional approach of focusing on individual energy carriers separately or aggregated in empirical investigations may lead to distorted results, given that energy resources and carriers tend to be subject to substitution over time. Therefore, the present study deviates from the predominant approach in previous research [15][16][17][18] by not concentrating on individual or aggregated energy resources.
Fig. 1. Energy intensity levels of primary energy across Iran, MENA, World, and OECD members (2000-2020) [1].
The subsequent sections of this paper are structured as follows: Section 2 outlines the employed data and methodology; Section 3 presents the empirical findings and initiates a comprehensive discussion, which also encompasses an examination of the robustness of the results. Lastly, Section 4 concludes the study, summarizing the principal findings and delineating their policy implications.
The inclusion of labor and capital in the relationship between energy and economic growth is underpinned by a sound theoretical foundation. Mainstream growth models typically do not recognize energy as a fundamental factor of production, while models rooted in ecological economics emphasize energy as the primary factor, often neglecting the role of other conventional factors. However, there is a perspective that seeks to bridge these two approaches [90]. Considering the objectives of the present study, a model specification rooted in this integrated approach is applied.
Table 2 outlines the model specifications typically used in similar studies. It is noteworthy that, in the past, many studies assumed exogenous technological improvements in their models. More recent research has aimed to proxy technological progress through considerations such as time trends, exports/imports, or financial development [83,91]. Trade openness, in particular, has garnered attention as it fosters competition in both domestic and foreign markets, promotes efficient resource utilization, and facilitates the dissemination of knowledge and technology [92]. The diffusion of technology through international trade engenders spill-over benefits, which can significantly benefit developing and less developed nations.
Consequently, the current research introduces trade openness into the model specification as a proxy for technological progress, thereby endogenizing technological advancement. Trade openness is measured as the combined value of exports and imports of goods and services, expressed as a proportion of the Gross Domestic Product. It is important to acknowledge that trade openness also reflects revenue from oil exports. Additionally, in the case of Iran, trade openness may concurrently mirror the effects of economic sanctions, as the imposition of such sanctions can significantly curtail trade levels.
Table 1 (fragment). M.S. Gorus and M. Aydin [28]: Granger causality analysis, 1975 to 2014; the conservation hypothesis holds in the short run, while the growth hypothesis is valid in the long run. S. Erdogan et al. [29]: panel data analysis within a bivariate framework, 1990 to 2014; neither energy consumption causes economic growth, nor economic growth causes energy consumption. P.K. Narayan and R. Smyth [30]: panel causality analysis, 1974 to 2002; statistically significant feedback effects between electricity consumption, exports, and GDP.
Within the conceptual framework of the energy system, which encompasses the entire energy chain from energy resources to final energy carriers to energy services, final energy carriers are utilized through various technologies to provide energy services, also referred to as useful energy or useful work. Hence, the notion of considering useful work as a driving variable, applied in some previous studies [93], does not align well with theoretical foundations. Consequently, it is more theoretically appropriate to view final energy, in conjunction with labor, capital, and productivity, as the driving forces behind economic growth. Specifically, in the context of this research, sectoral final energy consumption is selected as the appropriate energy variable. By distinguishing between energy use in buildings and industrial energy use and employing the proposed model specification, the study aims to investigate the hypothesis that productive energy consumption, which can be substituted by capital, labor, and trade openness, contributes to value-added economic growth. To mitigate potential biases in causality analysis, earlier work by Pablo-Romero and Sanchez-Braza [94] introduced the concept of productive energy. Productive energy refers to the total final energy consumption in productive processes, while excluding other energy uses. This distinction is valuable in disentangling the economic implications of energy consumption in different contexts and sectors, offering a more nuanced perspective on its role in driving economic growth.
Data used in the present research are annual time series from 1967 to 2018, taken from the World Bank Data [1], Central Bank of Iran [95], Iran Energy Statistics Yearbook 2018 [96], and IEA [97]. The proposed variables are GDP (Y), capital stock (K), the labor force (L), aggregated and sectoral final energy consumption (E), and trade openness (TO). The availability of energy-related data constrained the selection of 1967 as the starting point. Fig. 2 presents GDP and final energy consumption trends in Iran over the past fifty years. As reported in the Iran Energy Statistics Yearbook 2018, the shares of the buildings, industrial, transportation, and agriculture sectors in final energy consumption are 34.5 %, 36.8 %, 24.4 %, and 4.0 %, respectively [2].
Table 2 (fragment). ARDL: a long-run relationship and unidirectional causality running from energy use to economic growth in China. P. Sadorsky [91], Y = f(K, L, E, TR), vector error correction: a causal relationship between energy and exports/imports in the long run in seven South American countries. Footnote a: Y, y, K, L, H, E, IE, RE, CE, TE, EX, UW, ELC, RE, RELG, NRELG, GAS, EP, TR, FD, and t refer to (real) GDP, per capita GDP, gross (fixed) capital formation/capital stock, total labor force, human capital (based on years of schooling), energy consumption, industrial energy consumption, residential energy consumption, commercial energy consumption, transportation energy consumption, total exergy consumption, total useful work consumption, electricity consumption, renewable energy consumption, renewable electricity generation, non-renewable electricity generation, natural gas consumption, energy price, trade, financial development, and period, respectively.
Methodology
The empirical examination of the relationship between energy use and economic production underscores intricate dynamics and the necessity for robust analytical methods to accurately disentangle these relationships. Traditional linear regression or correlation methods are inadequate for establishing causal relations among variables trending over time, as shared directionality can lead to spurious correlations. Instead, causality tests and cointegration analysis are essential tools in energy economics research, allowing for rigorous examination of causal relationships and long-term equilibrium dynamics [98].
The empirical model used to explore the long-term relationship between energy consumption and economic output is presented as follows:
lnY_t = β0 + β1 lnK_t + β2 lnL_t + β3 lnTO_t + β4 lnE_t + ε_t (1)
Here, lnY_t, lnK_t, lnL_t, and lnTO_t are the natural logarithms of GDP, capital stock, labor force, and trade openness, respectively. As well, lnE_t represents aggregated or sectoral energy consumption. The use of natural logarithms for these variables is a statistical technique that helps mitigate the impact of outliers and aligns with the assumptions of inferential statistics. An alternative specification has also been developed in which energy consumption (lnE_t) is substituted with exergy consumption (lnEx_t). In this context, exergy is employed to account for the quality of energy and its capacity to perform useful work, offering a distinct perspective on the relationship between energy and economic growth by focusing on the efficiency and quality of energy in driving economic processes. Well-defined coefficients are utilized to convert the thermal energy content of different energy carriers into their exergy content [59]. For instance, electricity is commonly considered as pure useful work, with an energy to exergy conversion rate of one.
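The carrier-wise conversion from final energy to exergy mentioned above can be illustrated as a weighted sum. In the sketch below, only electricity's factor of one is taken from the text; the remaining coefficients and quantities are hypothetical placeholders rather than the values used in the study or in Reference [59].

```python
# Illustrative exergy aggregation: multiply each carrier's final energy
# by an energy-to-exergy conversion coefficient and sum over carriers.
energy_use = {          # final energy by carrier (arbitrary units, placeholders)
    "electricity": 120.0,
    "natural_gas": 300.0,
    "gasoline":    180.0,
}

exergy_factor = {       # electricity = 1.0 per the text; others are placeholders
    "electricity": 1.00,
    "natural_gas": 1.04,
    "gasoline":    1.06,
}

total_exergy = sum(energy_use[c] * exergy_factor[c] for c in energy_use)
print(f"total exergy consumption: {total_exergy:.1f}")
```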
In this study, ARDL bounds testing [99,100] and VECM Granger causality approaches are used to investigate cointegration and causality, respectively. The analysis unfolds in several steps. Initially, a unit root test is conducted to ascertain whether the variables under consideration exhibit a unit root, indicating non-stationarity. Following this, the ARDL approach is employed to investigate the presence of a long-run equilibrium relationship among the variables of interest. This step aims to assess whether these variables have a stable, consistent relationship over the long term. Subsequently, multivariate Granger-type causality tests are constructed within the framework of a VECM. This step allows for the determination of the direction of causality among the variables, shedding light on how they influence each other. By employing these methodologies, the study seeks to establish the presence of cointegration and the causal relationships among the variables in question.
Unit root test
Before conducting cointegration tests, it is imperative to assess the stationarity of the time series data. The ARDL modeling approach does not impose a strict requirement for all variables to be integrated of order 0, I(0), or 1, I(1), but it is important to note that the critical F-statistics used for cointegration analysis are not valid if any variable in the model is integrated of order two, I(2), or higher.
To ensure robust results, this study employs two different unit root tests: the augmented Dickey-Fuller (ADF) and Phillips-Perron (PP) tests. Both tests aim to evaluate the null hypothesis, suggesting the presence of non-stationarity, against the alternative hypothesis, which indicates the absence of a unit root (stationarity). By conducting both ADF and PP tests, the study seeks to enhance the reliability and robustness of the stationarity assessments.
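A minimal sketch of the level versus first-difference stationarity check in Python with statsmodels is shown below; the series is synthetic toy data rather than the study's dataset, and the Phillips-Perron counterpart is only noted in a comment.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller

# Hypothetical annual series; in the study these would be lnY, lnK, lnL, lnTO
# and the sectoral energy series for 1967-2018.
rng = np.random.default_rng(1)
lnY = pd.Series(np.cumsum(rng.normal(0.03, 0.05, size=52)))  # I(1)-like toy data

def adf_report(series, name):
    # ADF test with lag length chosen by AIC
    stat, pvalue, *_ = adfuller(series.dropna(), autolag="AIC")
    print(f"{name}: ADF stat = {stat:.2f}, p = {pvalue:.3f}")

adf_report(lnY, "lnY (level)")
adf_report(lnY.diff(), "lnY (first difference)")
# The Phillips-Perron test is available, e.g., as arch.unitroot.PhillipsPerron,
# and is applied in the same level / first-difference fashion.
```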
Cointegration test
The ARDL bounds testing approach is proposed to examine the long-run relationship among the variables. The ARDL model has a general form in which the dependent variable in difference form can depend on itself (in lagged levels or differences) and on the explanatory variables, which may enter in lagged levels, contemporaneous differences, or lagged differences. In the context of this study, the following models are to be estimated within the ARDL framework, allowing for an extensive examination of the relationships among the variables. The selection of the optimal lag orders is guided by minimizing the values of the Akaike Information Criterion (AIC). Due to the limited number of observations, a maximum of four lags was employed in this process. Moreover, the study utilizes several diagnostic tests to assess the model's reliability and robustness. These tests include the Shapiro-Wilk test for normality, the Breusch-Godfrey (BG) test for serial correlation, the Breusch-Pagan (BP) test for heteroscedasticity, and Ramsey's RESET test for potential functional form misspecification, thereby checking the model's adherence to key statistical assumptions such as normality, absence of serial correlation, homoscedasticity, and correct functional form specification.
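The ARDL specifications referred to here as Equations (2) and (3) appear to have been dropped during text extraction. A plausible reconstruction of Equation (2), assuming the standard unrestricted error-correction (bounds-testing) form with the level coefficients α_j that are tested below, is:

$$
\Delta \ln Y_t = c_0 + \sum_{i=1}^{p} \phi_i\, \Delta \ln Y_{t-i} + \sum_{i=0}^{q} \big(\theta_{E,i}\, \Delta \ln E_{t-i} + \theta_{K,i}\, \Delta \ln K_{t-i} + \theta_{L,i}\, \Delta \ln L_{t-i} + \theta_{T,i}\, \Delta \ln TO_{t-i}\big) + \alpha_Y \ln Y_{t-1} + \alpha_E \ln E_{t-1} + \alpha_K \ln K_{t-1} + \alpha_L \ln L_{t-1} + \alpha_T \ln TO_{t-1} + u_t
$$

with Equation (3) obtained analogously by taking (total or sectoral) energy consumption as the dependent variable and the remaining variables as regressors.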
The null hypothesis of no long-run relationship between the variables in Equation (2) is H0: αj = 0, j = Y, E, K, L, T, against the alternative hypothesis of cointegration, H1: αj ≠ 0, j = Y, E, K, L, T. Similar hypotheses can be derived for Equation (3). M.H. Pesaran et al. [101] generated two sets of critical values, the upper and lower bound critical values. If the calculated F-statistic lies above the band's upper level, the null hypothesis is rejected, indicating cointegration. If the F-statistic is below the lower critical value, the null hypothesis of no cointegration cannot be rejected. Finally, the decision regarding cointegration is inconclusive if the calculated F-statistic falls between the two critical values.
Causality test
The third step in the analysis involves constructing Granger-type causality models augmented with a lagged error correction term (ECT), given that the series are cointegrated. In cases where no cointegration is detected, an ARDL short-run model is developed to investigate short-run causality. The Granger causality approach is applied to examine the direction of causality among the variables. The equations used for the causality analysis are presented in Equations (4) and (5). In these equations, the significance of the coefficients of the differenced terms serves to establish the short-term dynamics from any independent variable to the dependent variable. Furthermore, the existence of a long-term relationship is contingent on the statistical significance of the lagged error correction term, ECT_t-1, with a negative sign, underscoring the role of the error correction mechanism in restoring equilibrium over time.
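Equations (4) and (5) also seem to have been lost in extraction. A plausible reconstruction of Equation (4), assuming the usual VECM Granger-causality form augmented with the lagged error-correction term described above, is:

$$
\Delta \ln Y_t = \mu_1 + \sum_{i=1}^{p}\beta_{1i}\, \Delta \ln Y_{t-i} + \sum_{i=1}^{p}\beta_{2i}\, \Delta \ln E_{t-i} + \sum_{i=1}^{p}\beta_{3i}\, \Delta \ln K_{t-i} + \sum_{i=1}^{p}\beta_{4i}\, \Delta \ln L_{t-i} + \sum_{i=1}^{p}\beta_{5i}\, \Delta \ln TO_{t-i} + \lambda_1\, ECT_{t-1} + \nu_{1t}
$$

with Equation (5) defined analogously for ΔlnE_t. Joint significance of the lagged-difference coefficients captures short-run causality, while a negative and significant coefficient on ECT_{t-1} indicates long-run causality.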
The incorporation of the ECT implies that the fluctuations in the endogenous variables are influenced by the extent of deviation from the long-term equilibrium. Therefore, the coefficients associated with these error correction terms signify the degree of disparity or divergence of the dependent variables from their long-run equilibrium values. In essence, the ECT serves as a mechanism to correct or adjust for any deviations from the long-term balance in the model, facilitating an understanding of how the system returns to equilibrium following temporary imbalances.
Results and discussion
This section presents the study's findings, commencing with the outcomes of the unit root tests. Subsequently, it delves into the ARDL bounds testing and Granger-causality analyses. In the initial phase, the Augmented Dickey-Fuller (ADF) and Phillips-Perron (PP) tests for the presence of a unit root were executed on both the level and first-difference terms of the time series data, employing a significance level of five percent. The results of the stationarity tests are displayed in Table 3, where lnEI, lnEB, and lnET correspond to the natural logarithms of final energy consumption in the industrial, buildings, and transportation sectors, respectively. According to the findings from the ADF and PP tests, all variables exhibit stationarity after a single differencing operation. With the stationarity of the variables established, the subsequent step involves the application of the ARDL bounds testing method to scrutinize the existence of a long-term relationship among the targeted variables.
Cointegration evidence
The analysis results for models with computed F-statistics that exceed the upper bound critical value at a 5 % significance level, demonstrating valid diagnostic statistics, are presented in Table 4. Selecting the appropriate lag length before applying the ARDL bounds testing approach is a crucial preliminary step, ensuring that the analysis is based on a robust and reliable framework.
Cointegration is observed when GDP is considered as the dependent variable. It occurs when either final energy consumption in the industry sector (EI) or transportation sector (ET), along with capital stock, labor force, and trade openness, are the driving variables. Additionally, cointegration is found when total final energy consumption and final energy consumption in the buildings and transportation sectors are considered as the dependent variables, with GDP, capital, labor, and trade openness as the driving variables.
It is important to note that the long-term relationships of lnY ~ lnET and lnET ~ lnY are significant at the 5 % level, while other identified long-term relationships are significant at the 1 % level. This underscores the lower robustness and reliability of the lnY ~ lnET and lnET ~ lnY relationships. The presence of a (conservative) bidirectional long-term relationship between economic output and energy consumption in the transportation sector reflects the intricate and multifaceted nature of the relationship between these two variables. Final energy consumption in the transportation sector predominantly involves gasoline and diesel consumption. Gasoline is primarily used for private cars and taxis, while diesel is mostly consumed for freight transportation in Iran. The implication here would be that while passenger transportation is influenced by income levels, the role of freight transportation in contributing to economic output is distinct and driven by different factors.
Furthermore, the study results reveal that when GDP is modeled as a function of total final energy consumption, capital, labor, and trade openness, the null hypothesis of no long-run relationship cannot be rejected. This underscores the significance of examining the explanatory power of sectoral energy consumption concerning economic output.
In alternative specifications where energy is replaced with exergy, the overall results remain largely consistent with the main analysis. However, there are two exceptions: the specifications lnEx ~ lnY and lnY ~ lnExT fail to pass either the F-statistic test or diagnostic tests. Comparing the F-statistic and adjusted R-squared for paired specifications, it becomes evident that exergy and energy exhibit similar explanatory power with respect to economic output. In essence, the results reject the hypothesis that exergy provides a superior explanation for economic output compared to energy. This implies that, in the context of this study, the choice between exergy and energy as explanatory variables does not significantly impact the understanding of the relationship between energy and economic output.
The findings in this study regarding the long-term relationship between total final energy consumption and economic output align closely with the conclusions reached in prior research by M. Zamani [19], M. Mehrara [22], N. Apergis and C.F. Tang [23], C.C. Lee and C.P. Chang [24], S. Nasreen and S. Anwar [25], and M.R. Lotfalipour et al. [27]. However, they do not fully concur with the results of studies conducted by S. Erdogan et al. [29], M.S. Gorus and M. Aydin [28], and P.K. Narayan and S. Popp [21]. These disparities in findings emphasize the complexity and multifaceted nature of the relationship between energy consumption and economic growth, which can be influenced by a multitude of factors and conditions specific to each study's context.
Regarding the long-run relationship between economic output and industrial energy consumption, the elasticity with respect to energy use is positive and significant (0.48). This positive and significant elasticity value shows that industrial energy use is a key factor in economic growth. The calculated elasticity is compatible with the values offered for developing economies [102]. Research on developed countries [103] suggests that policies targeting energy consumption in such nations may have limited adverse effects on long-term economic growth. However, our study shows a different pattern in Iran, a developing nation.
Considering the findings of this study, which underscore the significant positive elasticity of industrial energy use on economic growth, countries like Iran face a daunting challenge in limiting their energy consumption to reduce emissions. The pursuit of policies aimed at reducing energy use and greenhouse gas emissions must be approached with careful consideration. Specifically, such policies need to be crafted and implemented in a nuanced manner to mitigate any potential adverse impacts. This is particularly crucial given the intricate relationship between energy consumption, economic growth, and environmental sustainability highlighted in our research. Therefore, the formulation and implementation of policies to reduce energy consumption and emissions require a delicate balance, considering the economic development imperatives while also addressing environmental concerns.
Table 3. The number of differences required for time series to be stationary.
Causality relationships
In this analysis, the joint significance of the differenced explanatory variables of energy or GDP indicates the presence of short-run causality either from energy to GDP or from GDP to energy, respectively. Meanwhile, the t-statistic on the coefficients of the lagged error-correction term provides insight into the significance of long-run causality. The results of examining short-run and long-run dynamics are presented in Table 5.
The findings confirm the economically influential role of energy consumption in the industry and transportation sectors in the long run. Energy consumption in the buildings sector, despite constituting a sizable portion of total energy consumption, does not exhibit either a propelling or restrictive effect on economic output in the long run. Similarly, total energy consumption does not play a dominant role in determining long-term economic output; rather, it appears to be influenced by income. Moreover, income would also affect transportation energy use in the long run.
However, in the short run, variations in total energy consumption significantly impact changes in economic output, and total energy consumption is notably responsive to changes in economic output. Consequently, the study reveals the existence of bidirectional short-run causality between total energy consumption and economic output, aligning with the feedback hypothesis and highlighting the interdependent nature of this relationship in the short run [104]. Furthermore, income would affect both transportation and industrial energy use in the short term.
The application of the ECM approach reaffirms the results of the bounds testing approach. As required, ECT coefficients in the proposed dynamics are both negative and significant at the 1 % level. The error correction term coefficient is called the adjustment coefficient or speed of adjustment [105]. It illustrates how much of the adjustment to equilibrium takes place in each period. An ECT of −1 shows that the adjustment is instantaneous, or that 100 % of the adjustment takes place within a year. Regarding the ECT coefficient for the lnY ~ lnEI dynamic, which is −0.38, it will take slightly less than three years to reach long-run equilibrium in case of any shock or disturbance. The estimated adjustment speed values, as shown in Table 6, are comparable with those computed by other studies.
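The quoted horizon follows from the rough linear reading of the adjustment coefficient (assuming a constant share of the disequilibrium is corrected each year, rather than the exact geometric decay):

$$
t_{\text{adjust}} \approx \frac{1}{\left| -0.38 \right|} \approx 2.6 \text{ years},
$$

which is consistent with the statement that long-run equilibrium is restored in slightly less than three years.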
In addition to assessing normality, serial correlation, heteroscedasticity, and model specification, we rigorously examine the stability of coefficients in our analysis. This examination is achieved through the application of cumulative sums of standardized residuals and cumulative sums of squared standardized residuals, commonly referred to as CUSUM and CUSUM-of-squares tests [106]. These tests aim to detect any structural changes in the coefficients over time. If the paths of fluctuations cross the boundaries calculated for a specified significance level (usually set at 0.05), these fluctuations are considered statistically significant, which results in the rejection of the null hypothesis that no structural change has occurred. As demonstrated in Fig. 3 for the relationship between GDP and final energy consumption in the industry sector, similar checks for the entirety of the long-term relationships verify the stability of coefficients in the developed model specifications. This comprehensive approach ensures the reliability and robustness of the findings.
Implications from other case studies
Building upon the concept of productive energy use previously discussed, empirical analysis of the energy-economy relationship in Iran statistically validates this notion. The findings underscore that energy consumption in the industry and transportation sectors significantly propels economic output, particularly in Iran, characterized by its reliance on resource-based industries [107]. Moreover, the analysis highlights income as the pivotal factor shaping total energy consumption and buildings energy use. This distinction elucidates how varying income levels and sectoral energy usage dynamics intricately influence the energy-economy relationship, offering valuable insights within the Iranian context.
To bolster the robustness of our earlier findings and broaden the examination of selected sectoral energy consumption as production factors in globally utilized integrated energy and economy systems models, we conducted an analysis of long-term relationships between sectoral energy use and economic output across diverse countries. While prevailing integrated assessment models like MESSAGEix [108] typically employ a nested Constant Elasticity of Substitution (CES) production function with inputs of capital, labor, and all end-use energy (Equation (6)), our study suggests a potential refinement to this approach. Specifically, it proposes focusing on specific sectoral energy uses rather than encompassing all energy end-uses. Furthermore, our findings advocate for shifting the focus from summing all sectoral energy usage to aggregating productive energy end-uses. This refinement carries manifold implications: it enhances specificity in understanding the nexus between economic output and energy consumption, enables targeted interventions for enhancing energy productivity, streamlines Integrated Assessment Model (IAM) structures for heightened modeling precision, and facilitates tailored climate policies to address sectoral disparities [97].
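Equation (6) itself appears to have been lost during extraction. Assuming the standard nested CES form used in MACRO-type models, with the symbols defined in the paragraph that follows, it would read approximately as:

$$
Y_t = \Big[\, a \big(K_t^{\alpha} L_t^{1-\alpha}\big)^{\rho} + \sum_{s} b_s\, E_{s,t}^{\rho} \,\Big]^{1/\rho}, \qquad \rho = \frac{\epsilon - 1}{\epsilon},
$$

where the sum runs over the sectoral energy end-uses s.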
Table 6. Estimates of adjustment speed from this study and some other studies.
Here, Y_t, K_t, L_t, and E_t represent GDP, capital stock, labor force, and sectoral energy use proxy variables, respectively. Additionally, ρ = (ϵ − 1)/ϵ, where ϵ denotes the elasticity of substitution, α represents the capital value share parameter, a signifies the production function coefficient of capital and labor, and b denotes the production function coefficients of the various sectoral energy end-uses. This cross-country analysis employs a methodology akin to that employed in the preceding phase of the research. While employing panel data analysis could yield more robust inferences, it is essential to acknowledge that due to data constraints, the analysis timeframe was confined to the period between 1971 and 2018 [97].
The results of the cross-country analysis, as presented in Table 7, compellingly support the proposed relationship between sectoral or total energy consumption and economic output. Particularly, the specification lnY ~ lnEI is found to hold in 12 out of 14 cases, while the specification lnY ~ lnE is only valid in 3 countries. Additionally, the specification lnY ~ lnET is observed to be valid in just 2 countries. Although both specifications lnY ~ lnE and lnY ~ lnEI are applicable to Chile, France, and Korea, employing industrial energy consumption as the explanatory variable consistently leads to a higher adjusted R-squared in these cases. These findings advocate for an alternative formulation of the production function within integrated assessment models, suggesting the adoption of industrial energy end-use instead of encompassing all energy end-uses. This reaffirms the pivotal role of industrial energy use in driving economic production. Based on the empirical results, the prolonged downturn in industrial value-added in the EU, despite the subsequent decrease in energy prices following the post-Russia war price hike, gains clarity, emphasizing the enduring impact of energy shortages in the industrial sector [109].
Table 7. The results of the cointegration analysis across different countries.
The cross-country findings concerning Iran underscore that energy consumption in the industry sector is the sole energy-related factor capable of explaining economic growth. This differs slightly from the earlier results, in which energy use in the transportation sector was also explanatory; however, the F-statistic value of the lnY ~ lnET model for Iran reported in Table 5 is not significantly higher than the upper bound critical value at 5 percent. The observed disparities in results could be attributed to variations in the timeframes and data sources. The earlier examination of the long-term relationship between energy and the economy in Iran was based on data derived from national databases over an extended period. In contrast, the cross-country analysis relies on internationally recognized and available data, which may contribute to differences in the outcomes.
Conclusion and policy implications
The current investigation stands as a distinctive research endeavor delving into the intricate relationship between economic output and sectoral energy consumption, an exploration that extends beyond the ambit of total energy consumption or energy use by resource type. This study seeks to probe this complex relationship considering the influence of key control variables such as capital stock, labor force, and trade openness. Employing a methodological framework comprising a multi-step approach, the study unfolds as follows: (1) conducting stationarity tests for unit roots, (2) undertaking bounds testing for cointegration, and (3) executing a comprehensive causality analysis, utilizing the ARDL ECM method. This systematic approach furthers our understanding of the linkages between economic production and energy use in Iran and guides the formulation of effective policy measures.
While a conclusive long-run relationship has been ascertained, extending from economic output to total energy consumption for Iran, the converse causal link is not substantiated by meaningful empirical evidence. This finding challenges the growth hypothesis positing that total energy consumption significantly contributes to economic growth. The rejection of this hypothesis implies that energy conservation-oriented policies may not hinder economic growth in general [104].
Furthermore, our empirical findings reveal a pronounced degree of heterogeneity in the intricate relationships between sectoral energy consumption and economic output. Particularly noteworthy is the emergence of energy consumption in the industrial sector as the most pivotal determinant in explaining aggregate economic output over the long term. This observation supports the growth hypothesis, indicating that industrial energy consumption plays a crucial role in economic growth, both directly and as a complement to labor and capital in the production process. Consequently, this underscores the significant repercussions of indiscriminate energy-related policies and measures concerning energy provisioning for industrial sectors. Such measures, if not well-calibrated, have the potential to exert adverse effects on overall economic performance. Thus, our study strongly advocates for a reevaluation of current energy management plans in Iran, especially in light of recent measures to limit energy supply to industries in favor of households. Based on empirical evidence demonstrating the significant role of industrial energy consumption in driving economic production, it is evident that such plans may pose substantial risks to the economy in the long and short term. Therefore, there is a pressing need to replace these measures with energy efficiency-focused policies and demand-side management strategies.
The study reveals a significant industrial energy use elasticity value of 0.48 in Iran, which differs from the values observed in developed countries, where the link is weaker. This significant elasticity value emphasizes the critical challenge for countries such as Iran in reducing energy consumption to mitigate emissions while sustaining economic growth. Iran's consistent pursuit of a resource-based industrialization approach in recent decades has led the country to be significantly reliant on energy-intensive industrial operations. Our study's implications resonate with a clear message: the continued and assured supply of energy resources for industries plays a pivotal role in sustaining GDP growth. However, this guarantee comes at the cost of environmental concerns, given Iran's predominant reliance on fossil fuels for almost 99% of its energy supply [2]. In response, policymakers face a pressing imperative to align with alternative energy supply and demand systems, as the current carbon-intensive energy supply model appears inherently incompatible with long-term sustainable development. Thus, the transition to renewable energy sources integrated into the industrial energy mix, alongside enhancing overall energy efficiency, emerges as an imperative strategic move in pursuit of a sustainable energy-environment-economy equilibrium.
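As a reading aid (an interpretation added here, not a statement from the source), in the log-log production-function specification the reported coefficient is an elasticity:

\varepsilon_{E_I} = \frac{\partial \ln Y}{\partial \ln E_I} \approx 0.48,

so a 1% increase in industrial energy use is associated with roughly a 0.48% increase in aggregate output, holding capital, labor, and trade openness constant.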
Moreover, the study's findings bolster a unidirectional inference of long-term causality, pointing from economic growth to final energy consumption of buildings. This unidirectional relationship implies that energy conservation policies, particularly those encompassing demand management measures within the building sector, are unlikely to adversely affect GDP. This observation also underscores the evidence that higher income levels are positively associated with enhanced well-being and energy use in buildings. In essence, our study's results advocate for fostering energy-efficient practices within the building sector to harmonize with the overarching goal of sustaining economic growth and enhancing the quality of life.
The heterogeneous outcomes witnessed across various sectors highlight the pivotal significance of productive energy utilization in fostering economic growth. In this regard, industrial energy consumption emerges as productive energy usage, given its direct impact on economic output. In stark contrast, energy consumption within the buildings sector is primarily geared towards providing comfort and well-being services that, while integral to overall quality of life, are not intrinsically tied to economic productivity.
The empirical findings derived from an examination of the dynamics within the context of Iran find resonance within the broader scope of panel data analysis. The elucidated long-term relationships between economic output and sectoral energy consumption, based on data encompassing fourteen diverse countries, reaffirm the centrality of industrial energy consumption as a key driver of economic production. In over 85% of the scrutinized economies, industrial energy consumption emerges as a pivotal factor in explaining variations in economic growth, whereas total energy consumption falls short of qualifying as a proper production factor in almost 80% of the case studies. This observation bears significant relevance for the integrated assessment models employed in devising strategies for climate change mitigation. It signals the necessity for a reevaluation of the underpinning formulations within such models, with the potential to yield different, robust results. The fundamental implication is that the role of industrial energy consumption as a productive energy use has far-reaching consequences for understanding economic growth dynamics, calling for a reexamination of the models informing climate change mitigation strategies.
In an alternative analytical framework, the conventional measure of energy is substituted with exergy to scrutinize its explanatory power concerning economic growth. A comparative examination of the results shows that both exergy and traditional energy use exhibit similar qualifications in elucidating economic output variations, effectively countering the notion that exergy might serve as a superior explanatory variable concerning changes in economic production.
Furthermore, the findings and discussions, particularly regarding the dynamics between economic output and energy use in the buildings and transportation sectors, underscore the novel approach to energy and climate policies, advocating a shift towards focusing on energy services rather than mere energy consumption. The findings of the study reaffirm the need for innovative policies that center on redefining energy use within the context of service provision [110]. For instance, considering mobility not as energy consumption but as a service derived from energy utilization opens avenues for transformative policy interventions. Crafting effective policies to achieve this delicate balance requires meticulous attention to avoid adverse impacts. It underscores the necessity for precise policy formulation and implementation to address both economic and environmental imperatives effectively.
Replicating this analysis using diverse analytical methods and alternative productivity proxy variables across a more extensive spectrum of countries and a longer temporal horizon would undoubtedly yield more robust and generalizable implications, particularly as they pertain to the methodological advancement of integrated assessment models. In addition, future empirical inquiries might consider introducing variables related to the structure of the economy or environmental externalities [111] to dissect the intricate interplay among energy, economic, and environmental factors, particularly in cross-country analyses. Such work would help bridge the energy consumption and economic output literature with the literature on the relationship between economic output and the environment.
K) | ARDL | Granger causality from real GDP to energy consumption in New Zealand
B.S. Warr and R.U. Ayres [59] | Y = f(K, L, EX); Y = f(K, L, UW) | vector error correction | output growth does not drive energy consumption in the US
Y. Wang et al. [60] | Y = f(K, L, E) | ARDL | existence of short-run and long-run causality running from energy consumption to economic growth in China
M. Shahbaz et al. [83] | Y = f(K, L, E, TR, FD)
Table 1
Summary of studies regarding the energy and economic growth relationship in Iran.
Table 2
Summary of primary studies on energy and economy nexus based on the neoclassical growth theory.
[58] | f(RE, K, L) | vector error correction | presence of unidirectional causality from economic growth to renewable energy consumption in the short run and a bidirectional causal relationship in the long run in 12 European Union countries
J.H. Yuan et al. [18] | Y = f(K, L, E) | Johansen | unlike Granger-causality outcomes across different energy carriers in China
N. Bowden and J.E. Payne [56] | Y = f(IE, RE, CE, TE, E, K, L) | Toda-Yamamoto | different relationships between real GDP and energy consumption across sectors in the US
M. Bartleet and R. Gounder [58]
Table 4
The results of the ARDL cointegration test with asymptotic critical value bounds of case III. All the specifications consider capital, labor, and trade openness as independent variables besides the mentioned one.
a b in p-values.
Table 5
Causality analysis results.
All the specifications consider capital, labor, and trade openness as independent variables besides the mentioned one.
a b in p-values. | 9,165.8 | 2024-05-28T00:00:00.000 | [
"Economics",
"Environmental Science"
] |
The Effectiveness of Mini Primer STR CODIS in DNA Degradation as the Effect of High-Temperature Exposure
Background Today, forensic identification through deoxyribonucleic acid (DNA) examination has achieved increasing recognition in supporting Indonesia's law enforcement. Such examinations are used to determine the parentage of a child, resolve paternity cases, establish genealogical relationships, or identify unknown crime victims. However, along with the development of DNA examination, problems have arisen. DNA can undergo degradation (commonly known as degraded DNA), which is one of the serious issues frequently encountered by forensic and DNA experts. Some forensic DNA experts overcome this issue by implementing a mini primer set, a method that reduces the size of STR assays in the examination of core DNA loci. Methods In this study, the writers examined the mini primers of CSF1PO, FGA, and D21S11 in molar teeth exposed to 500°C for 20 and 30 minutes and to 750°C for the same durations. Result The findings show that the DNA content of molar teeth decreased significantly (p < 0.05) as a result of high-temperature exposure. Visualization of the PCR results shows that CSF1PO is the only locus detected with the mini primer after exposure to 750°C for 30 minutes (the highest exposure in this research). Conclusions This finding suggests that this locus has potential for identification through DNA analysis, especially when DNA is degraded by high-temperature exposure. In addition, this could accelerate the identification process, especially in mass disaster events or criminal cases.
Introduction
Today, forensic identification through deoxyribonucleic acid (DNA) examination has achieved increasing recognition in supporting Indonesia's law enforcement. Such examinations are used to determine the parentage of a child, resolve paternity cases, establish genealogical relationships, or identify unknown crime victims. This is demonstrated by the recognition of the examination invented by Sir Alec Jeffreys as one of the admissible pieces of evidence in both the judicial and religious courts since 1997. DNA identification played a significant role in identifying the victims of the Bali bombing back in 2002 [1,2].
However, along with the development of DNA material examination, problems have arisen. One of the serious problems encountered by forensic DNA experts and others in this field is DNA in a degraded condition, commonly known as degraded DNA [3,4]. Some forensic DNA experts overcome this issue by implementing a mini primer set, a method that reduces the size of Short Tandem Repeat (STR) assays in the examination of core DNA loci [5].
This research applied an average temperature of 500°C as a treatment, the same temperature used in the earlier research by Thanakum (1999). In addition, 750°C was applied, the same temperature used in the research by Sosiawan (2007), with a 20-minute duration (the longest duration in Thanakum's research) and a 30-minute duration (the longest duration in Sosiawan's) [6,7].
Nevertheless, to date there has been no specific research on the effectiveness of the mini primer set as an alternative in forensic DNA identification using degraded DNA, especially for core DNA. It is therefore important to determine which loci (especially CSF1PO, FGA, and D21S11) have potential for use in the examination of degraded DNA.
The objective of this research is to determine the effectiveness rate of core DNA mini primer set utilization on CSF1PO, FGA, and D21S11 loci of assumed degraded DNA using the Polymerase Chain Reaction (PCR) method.
Materials and Method
This study is experimental laboratory research with a randomized posttest-only control group design. The samples of this study are 16 down, incubated at room temperature for 1-3 minutes, and centrifuged at 4000 rpm for 1-2 minutes at 4°C. The supernatant was carefully removed to prevent the removal of DNA (pellet). The pellet was washed twice with 0.8-1 ml of 75% ethanol. The tube was turned upside down 3-6 times each repetition. The tube was positioned upward for 0.5-1 minute, and the 75% ethanol was removed by pipetting or decanting. The pellet was dried up by opening the tube for 5-15 seconds. The pellet containing DNA was diluted on 25 μl distilled water, sufficiently vortexed, and stored at -20°C temperature.
Electrophoresis
Using 2% Agarose Gel. The procedure for preparing the agarose gel was as follows: 40 ml of 0.5X TBE was mixed with 0.8 grams of agarose in an Erlenmeyer flask. The mixture was stirred for 15 minutes and placed in an oven at 60-64°C until no agarose adhered to the flask wall. The mixture was poured into electrophoresis molds and the gel was left to set. Afterward, TBE was poured evenly over the gel, and the comb was lifted carefully.
Results
The weight of the tooth samples (second molar teeth) taken from corpse T4 was measured before and after the treatment. The average weights of the tooth samples before and after the treatment are shown in Table 1, which indicates that the weight of the tooth samples decreased by 50-60% after the treatment. Table 2 presents the average DNA content of the tooth samples.
PCR Amplification with mini primers FGA, CSF1PO, and D21S11 [8,9]: Table 2 shows that the DNA content of the tooth samples decreases as a result of high-temperature exposure. The superscript numbers (1 to 6) in the table do not function as references; they indicate significant results after exposure to 500°C for 20 and 30 minutes and to 750°C for 20 and 30 minutes. Figure 1 shows the electrophoresis visualization of the FGA locus PCR results of the tooth samples on 2% agarose gel. The visualization of the FGA locus PCR result with the mini primer shows that the 500°C exposure for 20 minutes is still detectable within the 118-170 bp range, while the exposures of 500°C for 30 minutes and 750°C for 20 and 30 minutes are undetectable.
The visualization of the PCR result presented in Figure 1 shows that only 500°C exposure for 20 minutes is detectable within the 118-170 bp range. The following image is the visualization of the tooth sample CSF1PO locus PCR result with agarose gel 2% after 500°C exposure for 20 and 30 minutes as well as 750°C exposure for 20 and 30 minutes.
The visualization of the PCR result presented in Figure 2 shows that exposures of 500°C for 20 and 30 minutes, and 750°C for 20 and 30 minutes are detectable within the range of 89-129 bp. The following image is the visualization of the tooth sample D21S11 locus PCR result with agarose gel 2% after 500°C exposure for 20 and 30 minutes and 750°C for 20 and 30 minutes.
The visualization of the PCR result presented in Figure 3 illustrates that all exposures, from 500°C for 20 minutes to 750°C for 30 minutes, are undetectable within the 153-221 bp range.
The complete results of the DNA examination detecting the FGA, CSF1PO, and D21S11 loci on tooth samples using a mini primer after high-temperature exposure are presented in Table 3. Table 3 shows that STR examination of tooth sample DNA at the FGA locus with exposures of 500°C for 20 minutes and 750°C for 20 minutes is still detectable (31.25% of samples). Furthermore, the CSF1PO locus is also detectable after exposures of 500°C and 750°C for 20 and 30 minutes. However, DNA at the D21S11 locus is undetectable for all exposures from 500°C for 20 minutes to 750°C for 30 minutes. DNA degradation reduces the ability to detect Short Tandem Repeats (STRs) by up to 93% [10]. The minimum DNA content required in forensic DNA examination is 50 ng and 20 ng, respectively, while Butler (2005) argues that the minimum DNA content used in the STR examination is 0.5-2.5 ng [11,12].
Discussion
In addition to the DNA content of the sample, Polymerase Chain Reaction- (PCR-) based DNA examination also requires adequate DNA quality, meaning that the DNA must not be in a degraded condition. Severely degraded DNA may prevent the primer from annealing to the target DNA to be replicated [2,13-15]. To obtain adequate visualization results, adequate DNA purity and proper DNA content are required so that the DNA can be used as material in DNA examinations, including the identification process and paternity testing [15].
DNA degradation as a result of abnormal exposures, such as high temperatures, may be caused by irreversible damage to DNA hydrogen bonds. This condition damages the purine-pyrimidine pairing that is the main component of the DNA structure [4,16]. This study employed samples taken from a corpse with unknown residency (Tempat Tinggal Tidak Tetap/T4). The degradation of DNA samples after death is an endogenous process beginning soon after death. DNA degradation may occur together with the decomposition process through autolysis and bacterial decomposition.
Postmortem DNA degradation as the result of the autolysis process can occur in the forms of pyrimidine modification, baseless sites, intermolecular crosslink, and DNA's low molecular weight as the result of strand breakage [4,15,16].
The findings of this study show that only the CSF1PO locus with the mini primer is detectable after exposure to 750°C for 30 minutes, the highest temperature and the longest duration applied in this study. This indicates that DNA examination of tooth samples through STR locus detection yields different responses depending on the temperature to which the tooth samples are exposed.
Teeth are also composed of the most completely mineralized hard tissue. The mineral is known as apatite, which mostly comprises hydroxyapatite. Teeth also contain essential secondary minerals at levels higher than those of bones, namely calcite, limonite, pyrite, and vivianite, giving teeth strong endurance and protection [14,17].
The use of a mini primer is an alternative to the standard primer for DNA in a degraded condition, as the use of a standard primer on degraded DNA is less likely to succeed. A mini primer is the result of redesigning a standard primer by reducing the amplicon size, shifting the position of the primer as close as possible to the loop area [15]. The mini primer is an attractive alternative for forensic DNA analysis of degraded DNA compared with forensic analysis using mtDNA.
The success in detecting this locus is supported by the different amplicon sizes and the guanine-cytosine (GC) content of each locus. GC pairing has higher stability against denaturation than adenine-thymine pairing [15].
The measurement of the GC content ratio shows a significant result. The result of the GC ratio measurement of each locus is CSF1PO: 42.6%; FGA: 35.7%; and D21S11: 34.1%. Furthermore, consecutive adenine is a potential target of DNA degradation caused by high temperatures. Adenine is the easiest base to oxidize [15,16].
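As an illustration of how a GC-content percentage such as those quoted above can be computed (a generic sketch, not taken from the paper; the example sequence is made up):

```python
def gc_content(seq: str) -> float:
    """Return the GC percentage of a DNA sequence."""
    seq = seq.upper()
    gc = sum(seq.count(base) for base in "GC")
    return 100.0 * gc / len(seq)

# Hypothetical example sequence; real locus/primer sequences would be used in practice.
print(round(gc_content("AGCTTAGGCGCATCGATCGGCTA"), 1))
```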
One obstacle in this research is that the obtained samples are not homogeneous because of the limited number of corpses with unknown residency. Another problem is the lead time required to order the primers in advance.
Conclusion
The CSF1PO locus, still detected with a mini primer after exposure to 750°C for 30 minutes, shows potential for identification through DNA analysis. This is true especially for DNA degraded by high-temperature exposure, and it could accelerate the identification process, mainly in mass disasters and criminal cases. A limitation of this study is that the locus DNA examined is limited to the population in Java, so the suggested DNA examination procedures cannot be generalized. Further studies could examine each population in Indonesia so that the use of DNA, even after high-temperature exposure, can be generalized and promoted. This study relates to other studies, especially those on DNA examination after exposure to high temperature.
Data Availability
The data (table data and pictures) used to support the findings of this study are included in the article. The author and co-author agree to share the data and have included them in the main manuscript. | 2,735.6 | 2020-12-23T00:00:00.000 | [
"Law",
"Biology"
] |
Data driven of underground water level using artificial intelligence hybrid algorithms
As the population grows, industry and agriculture have also developed, and water resources require quantitative and qualitative management. Currently, the management of water resources is essential in the exploitation and development of these resources. For this reason, it is important to study water level fluctuations to assess the amount of underground water storage. It is vital to study the level of underground water in Khuzestan province, which has a dry climate. The existing methods for predicting and managing water resources are used in studies according to their strengths and weaknesses and according to the conditions. In recent years, artificial intelligence has been used extensively for groundwater resources worldwide. Since artificial intelligence models have so far provided good results for water resources, in this study the hybrid models of three new recombined methods, FF-KNN, ABC-KNN and DL-FF-KNN-ABC-MLP, have been used to predict the underground water level in Khuzestan province (Qale-Tol area). The novelty of this technique is that it first performs classification with the first block (a combination of the FF-DWKNN algorithm) and then predicts with the second block (a combination of the ABC-MLP algorithm). This feature enables the algorithm to reduce data noise. To predict this key parameter, part of the data from wells 1-5 was used to build and test the artificial intelligence hybrid models, and data from wells 6-8 were used to check the developed models. The results show that the statistical RMSE values of this algorithm for the training, test and total data are 0.0451, 0.0597 and 0.0701, respectively. According to the results reported in the tables, the performance accuracy of DL-FF-KNN-ABC-MLP for predicting this key parameter is very high.
effective in forecasting the underground water level 12. Emamgholizadeh and Mohammadi presented a new hybrid method based on SVM, PSO, and IWO models, with an SVM-PSOIWO structure, for estimating soil cation exchange capacity (CEC). Based on the findings of that paper, the novel combination algorithm, when applied to the prediction of a three-month period with RMSE (R2) of 0.229 Cmol+ kg−1 (0.924), has a high degree of accuracy 13. Vadiati et al. predicted the underground water level in the Tehran-Karaj plain using FL, ANFIS and SVM. The data used in that study were total rainfall, groundwater evaporation, average temperature, total transpiration and monthly average river flow. Their results showed that ANFIS is highly accurate in predicting the underground water level, although all three methods performed well. The models predict the underground water level for the next 1 and 2 months, and their predictions for the next 3 months are also acceptable 14. Mohammadi predicted Peru's hydrological conditions over the course of 3, 6, and 24 months using the ANN-FA model, with the standardized precipitation index (SPI) of the surrounding areas as input data. The findings for this new approach, with RMSE = 0.29 and R = 0.94, demonstrate the excellent performance accuracy of the algorithm, and he noted that the model might also be useful in other areas 15. In this study, 2112 data sets collected from 8 wells were used to predict the underground water level of the Khuzestan region in Iran. To predict this important parameter, the FF-KNN, ABC-KNN and DL-FF-KNN-ABC-MLP algorithms were used. The characteristics of this algorithm are high accuracy, high speed and good performance. The results show that the DL-FF-KNN-ABC-MLP algorithm is more accurate than the other algorithms introduced in this article.
Materials and methods
KNN algorithm. The KNN algorithm is one of the data mining algorithms primarily used for data classification. This algorithm finds the k training samples that are closest to the test sample, calculates the average output of these k samples, and takes it as the estimated value for the test sample. The requirements of this algorithm are: first, a set of samples with outputs (labeled data); second, a similarity or distance measure to calculate the distance between two samples; and third, a value of k specifying the number of neighbors. The WKNN algorithm is the same as KNN, with the difference that each of the k nearest samples is weighted according to its distance from the test point, so that more distant samples have less effect on the output and closer samples have a greater effect 16. In Eq. (2), the value of C represents the label or output value of the samples. This equation is used for KNN, but in WKNN each sample is weighted according to its distance from the test data. The value of this weight is derived from Eq. (3), in which the variable w is the weight for each of the k samples. The final value is then calculated from the weights using Eq. (4). A minimal code sketch of this weighted scheme is given below, after the description of the bee algorithm.
Bee algorithm. The bee algorithm was developed in 2005 and simulates the feeding behavior of bee colonies 17. Bees can be divided into three categories. A bee that goes to a predetermined food source is called a worker (employed) bee, a bee that conducts a random search is called a scout bee, and a bee that waits in the dance area to decide which food source to exploit is called an onlooker bee. When a food source is abandoned, the corresponding bee becomes a scout.
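The following is a small, self-contained sketch of the distance-weighted KNN idea described above; the inverse-distance weighting and all variable names are illustrative assumptions rather than the authors' exact Eqs. (2)-(4).

```python
import numpy as np

def wknn_predict(X_train, y_train, x_test, k=5, eps=1e-9):
    """Predict the output of x_test as a distance-weighted average of its k nearest training samples."""
    d = np.linalg.norm(X_train - x_test, axis=1)   # Euclidean distances to all training samples
    idx = np.argsort(d)[:k]                        # indices of the k nearest samples
    w = 1.0 / (d[idx] + eps)                       # closer samples receive larger weights
    return float(np.sum(w * y_train[idx]) / np.sum(w))

# Tiny usage example with made-up data.
X = np.array([[0.1, 0.2], [0.4, 0.4], [0.9, 0.8], [0.3, 0.1]])
y = np.array([10.0, 12.0, 20.0, 11.0])
print(wknn_predict(X, y, np.array([0.2, 0.2]), k=3))
```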
Firefly algorithm. Fireflies are a kind of beetle that emits yellow, cold light through bioluminescence. For various reasons (about which there is a difference of opinion, such as reproduction or creating a defense mechanism), a firefly tends to move towards a firefly that is brighter than itself. The distance between fireflies, the amount of ambient light absorption, the type of light source, and the amount of light emitted from the source are factors that affect the light received from a source. The firefly algorithm is an optimization method that finds the optimal solution by simulating this behavior of fireflies 18-20.
Multilayer perceptron. Neural networks are intended to create patterns in the way a human brain does. A neural network works by creating an output pattern based on the input pattern delivered to the network 21,22. Neural networks are composed of several processing elements, or neurons, that receive and process input data and ultimately provide an output. The input data may be raw data or the output of other neurons 23. The output can be the final product or the input for other neurons. An artificial neural network consists of artificial neurons, which are processing elements. Each neuron has several inputs, and a weight is assigned to each input 24,25. The output of each neuron is obtained from the sum of all inputs multiplied by their weights, and the final output is produced by applying a transfer function.
A multilayer perceptron, or MLP, is an artificial neural network architecture in which the neurons of the network are divided into several layers 26. In these networks, the first layer is the input layer and the last layer is the output layer, and the intermediate layers are called hidden layers 27. This architecture is probably the most widely used neural network architecture.
Hybrid methods. In this paper, a hybrid method was developed by combining several methods, namely FF, KNN, ABC, MLP and K-Means. In general, this combined method can be divided into three parts and has two phases, training and testing. To increase the prediction accuracy, we used the data of 8 wells: using the K-Means clustering method, we first placed the wells with similar behavior in one group, and then, in the next block, using the data of the wells at time t and the group to which each well belongs, the neural network estimates the output of the well for time t + 1.
In the new method, the KNN method is used as the basic classification method, and the FF optimization algorithm is used to find the optimal coefficients (control parameters) for the input data. In addition, the MLP is used to estimate the output values, and the ABC algorithm is used to train it better. To perform the classification, we must use a new output value that we define ourselves; therefore, we add a new output to the dataset and obtain its value using the K-Means algorithm. The input data are sent to the classification block along with the new output. In the second step, when the classes have been determined, the data are sent to the second block to estimate the value; in this block, the ABC-MLP combination is used, and the input values are sent along with the new output. The control parameters for the approaches used in this article are listed in Table 1. Figure 1 shows the flow chart of the training stage of the new method. The new method consists of two phases, training and testing, and we will first look at the training phase.
Different data must be used for each of the two phases. Therefore, we considered 70% of the data as training data for the training phase and the remaining 30% as test data for the test section. From the 70% of the data considered for the training part, 30% was left for validation.
Training stage. First, the data should be normalized, which is done using Eq. (5). In Eq. (5), the variable M is the number of inputs and x_il is the lth input of the ith sample. Max(x_l) is the largest value of the lth input and Min(x_l) is its smallest value. Figure 2 shows the block diagram of the training stage.
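Based on the description of its terms, Eq. (5) presumably corresponds to the standard min-max normalization (a reconstruction offered here, not the authors' typeset equation):

\bar{x}_{il} = \frac{x_{il} - \mathrm{Min}(x_l)}{\mathrm{Max}(x_l) - \mathrm{Min}(x_l)}, \qquad l = 1, \dots, M,

which rescales every input to the interval [0, 1] before it enters the classification and estimation blocks.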
In Eq. (5), variable M is the number of inputs, x il is the lth input of the ith sample. The Max(xl) value is the lth largest input number and Min(xl) is the lth smallest input number. Figure 2 shows the block diagram related to the training stage. www.nature.com/scientificreports/ After data normalizing, we have to add a new entry to the data, which specifies the number of the cluster or class to which each data belongs. We add that input to be used by the classification block and increase the accuracy of the estimate. Determining which class each data belongs it does to by the first block, but since the data is not labeled and not clustered, the number of classes and their data must be determined for this block first. Thus, using the K-Means block, we first determine the optimum number of classes and data for each class. The www.nature.com/scientificreports/ Davies Bouldin value was used to arrive at the optimum number of classes. The smaller this value is, the more optimal the number of classes is ( Table 2).
The smaller the Davies-Bouldin distance for a given k, the more suitable the value of k. Therefore, for the data of these 8 wells, the optimal value is three clusters. We then divide all the data into three clusters using the K-Means algorithm and add a new output to each sample, which stores the class number and takes values from one to three. Figure 3 shows the grouping (blocking) of the wells. Figure 4 shows the validation for well 1; this graph displays the high accuracy of the algorithm's results and also demonstrates how the technique may reduce noise in large data sets.
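A minimal sketch (assumed details, not the authors' code) of selecting the number of clusters with K-Means and the Davies-Bouldin index, as described above; the synthetic array X merely stands in for the normalized well inputs.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2112, 4))          # placeholder for the normalized well inputs

scores = {}
for k in range(2, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = davies_bouldin_score(X, labels)   # lower is better

best_k = min(scores, key=scores.get)
print(scores, "-> chosen k =", best_k)

# The chosen cluster label would then be appended as an extra input column
# for the FF-KNN classification block.
cluster_id = KMeans(n_clusters=best_k, n_init=10, random_state=0).fit_predict(X)
```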
To create three clusters from the data of eight wells, we added a new column as the class number and assigned values between 1 and 3 based on the K-Means algorithm's output. This operation was performed only during training, not during testing. The first block used the data together with outputs 1, 2, and 3 for categorization. To improve accuracy, we considered coefficients for each input and used the firefly method to determine their best values. Because optimization methods such as the firefly algorithm generate different solutions each time they run, owing to the large number of near-optimal solutions, we obtained four weights with values of 0.1542987, 0.9254255, 0.4256712, and 0.6732144 from the algorithm's best answers for the four inputs. Without such coefficients, all inputs would have an equal impact on the output; with them, each input may have a different coefficient depending on its impact on the output value. Equations (6) to (12) are given to determine the statistical errors used to compare these algorithms. Based on the results presented in the results section and using these equations, we can compare the algorithms' performance accuracy.
Study area
The area under investigation is situated within the longitude range of 388,000 to 400,000 and the latitude range of 3,496,000 to 3,508,000. This region is located in the folded Zagros geological division of Iran and comprises anticlines and transects that vary in width, length, and height. The general orientation of this area is roughly northwest-southeast. The geological formations present in this region consist of rock units from the second and third ages as well as Quaternary sediments. The oldest rocks in this area are the thin limestones found in the Ilam-Soruk layer, followed by Pabdeh and Gurpi marl formations, Asmari limestone formations, chalk and marl layers from the Gachsaran formation, Bakhtiari conglomerate, and alluvial sediments arranged chronologically. The area being studied has two types of aquifers, alluvial and karst, from a hydrogeological perspective. The alluvial aquifer is located in the upper part of the Qalehtol plain and either reaches an impermeable bedrock or transforms into a karst aquifer at deeper levels. The karst aquifer is formed in the Asmari formation limestone and is limited by the impermeable Pabde formation below. There is no Gachsaran Formation outcrop on the northeastern side of the belt-long anticline, but on the southwestern side, it covers some areas of the Asmari Formation. Three limestone wells drilled by Khuzestan Water and Electricity Organization around the northwestern tip of Kamerdaraz anticline indicate a karst aquifer with high transfer and storage capabilities. The Asmari formation sinks under Barangerd plain from southeast to Qalehtol plain until it re-emerges in Haft Cheshme mountain north of Qalehtol. Two limestone wells with irrigation are also present in southeast Qalehtol plain. The northeastern edge of the anticline rises in Barangerd plain and finds a reversed state of syncline, while suspended sediment is enclosed on both sides by Pabdeh Formation, and in Tang Kurd, limestone outcrop represents termination of the aquifer.
Database
The underground water level, which determines the level of fresh underground water used for drinking and other applications such as agriculture, depends on various parameters: the underground water level in the previous three consecutive years, rainfall, river discharge and withdrawal (harvest) discharge.
In order to predict the underground water level with artificial intelligence hybrid algorithms, 2112 data points were collected from information related to 8 wells in an area of Khuzestan province (Qale-Tol area). This information includes the flow rate of the river inlet (feeding fresh water resources), the underground water level, rainfall and underground water withdrawal, examined with different time delays, for the years 1992 to 2013. An important point about these data is that, to determine the output (the underground water level at time t + 1), the input parameters at times t, t-1 and t-2 were used. The statistical parameters of the groundwater level (m), rainfall (mm), river rate (m3/s) and discharge rate (m3/s) for the 8 wells from 1992 to 2013 are reported in Table 3. Based on this model, the underground water level can be determined as a function of the underground water level in the previous three consecutive years, rainfall, river discharge and harvest discharge (Eq. 13; a plausible general form is sketched after this paragraph), where Q = groundwater level, R = rainfall, P = river rate and O = discharge rate. Cumulative distribution functions are commonly used in scientific articles to describe input and output data, and this method is also used here to describe the 2112 data points. Figure 5 shows the cumulative distribution functions used for predicting the groundwater level. The value of the cumulative distribution function for a groundwater level (Q) of 140 is approximately 16%, for 140 < Q < 154 it is approximately 45%, and for Q > 154 it is approximately 39%. For rainfall (R) of 12 it is approximately 56%, for 12 < R < 14 approximately 6%, and for R > 14 approximately 38%. For the river rate (P) of 2852 it is approximately 48%, for 2852 < P < 3207 approximately 46%, and for P > 3207 approximately 6%. For the discharge rate (O) of 5115 it is approximately 10%, for 5115 < O < 5476 approximately 41%, and for O > 5476 approximately 49%. As is clear from Fig. 5, the data related to river discharge and harvest discharge are normally distributed, while the data related to rainfall and the underground water level are non-normally distributed.
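Putting the variables just listed together, Eq. (13) presumably has the general form (a hedged reconstruction, since the typeset equation is not reproduced in the text):

Q_{t+1} = f\left(Q_{t},\, Q_{t-1},\, Q_{t-2},\, R_{t},\, P_{t},\, O_{t}\right),

where Q is the groundwater level, R the rainfall, P the river rate and O the discharge rate, with lagged groundwater levels covering the previous three consecutive time steps.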
Results and discussion
As mentioned before, the aim is to predict the underground water level from 2112 data points, using artificial intelligence hybrid algorithms, from the information related to 8 wells in one region of Khuzestan province. The recombinant hybrid artificial intelligence algorithms used are the FF-KNN, ABC-KNN and DL-FF-KNN-ABC-MLP algorithms. To develop and test the algorithms, we used the information from wells 1-5, and for their further development we used the information from wells 6-8. In order to develop these algorithms, 70% of the data related to the 5 wells (wells 1-5) was used for training and 30% for testing (to make a proper comparison between the algorithms, the same training and test subsets were used). In addition, error statistics were used to evaluate the artificial intelligence algorithms; with these statistical metrics, a correct comparison can be made between the algorithms used in this article to predict the underground water level. The results for the training, test and total data (well data 1-5) used to determine these indices are reported in Tables 4, 5 and 6, respectively (based on Eqs. (6) to (12)).
Table 3. Statistical parameters of the underground water level, rainfall, river discharge and harvest discharge for the 8 wells for the years 1992-2013.
Statistical parameters
The results for the different algorithms for the test data, training data and the whole data set are given in Tables 4, 5 and 6 for the FF-KNN, ABC-KNN and DL-FF-KNN-ABC-MLP algorithms for predicting the underground water level. According to the results presented in these tables, the performance accuracy of DL-FF-KNN-ABC-MLP for predicting this key parameter is very high: RMSE Train = 0.0451, RMSE Test = 0.0597 and RMSE Total = 0.0701. Moreover, on the basis of the results presented and a comparison of the STD values, a good comparison of the performance accuracy of the algorithms can be made. In other words, this comparison shows that the accuracy of the algorithms for predicting the underground water level is ABC-KNN < FF-KNN < DL-FF-KNN-ABC-MLP. Figures 6, 7 and 8 show the cross-plots of predicted against measured groundwater levels for the training, test and entire data sets, respectively. One of the important and practical statistical measures of algorithm performance is the R-square statistic; with it, the accuracy of the models can be assessed and the data checked graphically. As shown in these figures, the R-square value for the DL-FF-KNN-ABC-MLP algorithm indicates the highest performance accuracy. In Figs. 6, 7 and 8, the accuracy of the predicted points against the measured points can be judged from the distance of these points from the 1:1 (cross) line.
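As an illustrative helper (assumed, not the authors' Eqs. (6)-(12)) for the error statistics quoted in Tables 4-6, RMSE, R-squared and the standard deviation of the residuals can be computed as follows; the numeric values in the example are made up.

```python
import numpy as np

def error_stats(y_obs, y_pred):
    """Return RMSE, R-squared and residual standard deviation for observed vs. predicted values."""
    y_obs, y_pred = np.asarray(y_obs, float), np.asarray(y_pred, float)
    resid = y_obs - y_pred
    rmse = float(np.sqrt(np.mean(resid ** 2)))
    ss_res = float(np.sum(resid ** 2))
    ss_tot = float(np.sum((y_obs - y_obs.mean()) ** 2))
    r2 = 1.0 - ss_res / ss_tot
    return {"RMSE": rmse, "R2": r2, "STD": float(np.std(resid))}

# Example: compare measured vs. predicted groundwater levels (placeholder values).
print(error_stats([152.1, 153.4, 150.9], [152.0, 153.0, 151.3]))
```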
Development of the new model
This section of the article discusses the development and comparison of various models, including ABC-KNN, FF-KNN, and DL-FF-KNN-ABC-MLP (shown in supplementary file), for predicting groundwater levels in different wells in the same field. The study used information related to wells 6, 7, and 8 and first tested the models on wells 1-5, followed by checking the models on the remaining wells. Figure 9 provides a comparison of groundwater level predictions by year for wells 5 to 8 for the algorithms. The results show that the DL-FF-KNN-ABC-MLP algorithm outperformed the other algorithms in terms of performance accuracy for predicting groundwater levels in new wells in the same field. The study suggests that this algorithm could also be used in other fields and for predicting other key factors. The use of new information highlights the potential for this algorithm to be applied in various scenarios, and future researchers are encouraged to explore its application in other fields.
Figure 7.
Cross diagram of groundwater level prediction for the test data from wells 1-5 (30% of this data set).
Conclusion
In this study, 2112 data sets collected from 8 wells were used to predict the underground water level of the Khuzestan region in Iran (Qale-Tol area). In order to predict this parameter, three new artificial intelligence hybrid algorithms were used: FF-KNN, ABC-KNN and the newly developed hybrid DL-FF-KNN-ABC-MLP algorithm. The variables used as input data for the hybrid machine learning include the flow rate of the river (which feeds the fresh water sources), the underground water level, precipitation and withdrawal of underground water, examined with different time delays, together with the underground water level during the years 1992 to 2013. In order to develop these algorithms, 70% of the data related to 5 wells (wells 1-5) was used for training and 30% for testing. The results show that the performance accuracy of the DL-FF-KNN-ABC-MLP algorithm is better than that of the other two algorithms used in this article. The novelty of this technique is that it first performs classification with the first block (a combination of the FF-DWKNN algorithm) and then predicts with the second block (a combination of the ABC-MLP algorithm); this feature enables the algorithm to reduce data noise. The results of this algorithm for the training, test and entire data sets are RMSE Train = 0.0451, RMSE Test = 0.0597 and RMSE Total = 0.0701. It is suggested that other scientists use this modified algorithm to determine important parameters in the prediction of other hydrological variables, and that reservoir temperature and soil moisture effects be considered when predicting groundwater levels. Researchers can also use this algorithm for big data with high noise.
Data availability
In line with standard academic requirements, the dataset is available from the corresponding author on reasonable request. | 5,481 | 2023-06-26T00:00:00.000 | [
"Computer Science",
"Engineering",
"Environmental Science"
] |
An N‐Ethyl‐N‐Nitrosourea (ENU) Mutagenized Mouse Model for Autosomal Dominant Nonsyndromic Kyphoscoliosis Due to Vertebral Fusion
ABSTRACT Kyphosis and scoliosis are common spinal disorders that occur as part of complex syndromes or as nonsyndromic, idiopathic diseases. Familial and twin studies implicate genetic involvement, although the causative genes for idiopathic kyphoscoliosis remain to be identified. To facilitate these studies, we investigated progeny of mice treated with the chemical mutagen N‐ethyl‐N‐nitrosourea (ENU) and assessed them for morphological and radiographic abnormalities. This identified a mouse with kyphoscoliosis due to fused lumbar vertebrae, which was inherited as an autosomal dominant trait; the phenotype was designated as hereditary vertebral fusion (HVF) and the locus as Hvf. Micro–computed tomography (μCT) analysis confirmed the occurrence of nonsyndromic kyphoscoliosis due to fusion of lumbar vertebrae in HVF mice, consistent with a pattern of blocked vertebrae due to failure of segmentation. μCT scans also showed the lumbar vertebral column of HVF mice to have generalized disc narrowing, displacement with compression of the neural spine, and distorted transverse processes. Histology of lumbar vertebrae revealed HVF mice to have irregularly shaped vertebral bodies and displacement of intervertebral discs and ossification centers. Genetic mapping using a panel of single nucleotide polymorphic (SNP) loci arranged in chromosome sets and DNA samples from 23 HVF (eight males and 15 females) mice, localized Hvf to chromosome 4A3 and within a 5‐megabase (Mb) region containing nine protein coding genes, two processed transcripts, three microRNAs, five small nuclear RNAs, three large intergenic noncoding RNAs, and 24 pseudogenes. However, genome sequence analysis in this interval did not identify any abnormalities in the coding exons, or exon‐intron boundaries of any of these genes. Thus, our studies have established a mouse model for a monogenic form of nonsyndromic kyphoscoliosis due to fusion of lumbar vertebrae, and further identification of the underlying genetic defect will help elucidate the molecular mechanisms involved in kyphoscoliosis. © 2018 The Authors. JBMR Plus is published by Wiley Periodicals, Inc. on behalf of the American Society for Bone and Mineral Research
Introduction
Kyphosis is a common disorder of the vertebral column (1) that can occur in isolation, or in association with scoliosis in infants (2) and adolescents, (3) and osteoporosis in the elderly, (1) causing pain, decreased function and activity, (1) and increased risk of mortality in women above the age of 65 years. (1,4) Kyphosis, scoliosis, or kyphoscoliosis can occur at any age, secondary to other underlying developmental, musculoskeletal, neuromuscular, or spinal disorders, (5-10) and may be part of complex disorders such as the CHARGE syndrome (coloboma of the eye, heart defects, atresia of the nasal choanae, retardation of growth and/or development, genital and/or urinary abnormalities, and ear abnormalities and deafness), (9,11) or occur as a nonsyndromic condition. Indeed, the most common forms of kyphosis and scoliosis in adolescents are nonsyndromic and include: Scheuermann disease, (3) a form of nonsyndromic kyphosis, which affects >8% of the population (12); idiopathic scoliosis (IS), which affects approximately 2% to 3% of individuals (13-16); and congenital nonsyndromic scoliosis, which is reported to have a prevalence of approximately 0.5 to 1 per 1000 individuals. (9) Familial and twin studies have indicated a genetic basis for kyphosis (17-20) and scoliosis, (21-28) with likely genetic heterogeneity. However, studies aimed at defining the genetic abnormalities causing these spinal disorders have been hampered by their phenotypic and genetic heterogeneity, variable modes of inheritance, (8,29) and gene-environment interactions that may modify the phenotypic expression. (9) To facilitate these studies, we investigated the progeny of mice treated with the chemical mutagen N-ethyl-N-nitrosourea (ENU), (30) an alkylating agent that induces mutations in DNA at a frequency of 1 in every 1.5 megabases (Mb). (31) These mutations consist mainly of SNPs, and occasionally small indels, but not large structural variants. (31) Similar approaches using phenotypic assessments of mice with ENU mutations have successfully identified mouse models for hereditary human disorders, including skeletal dysplasias. (32-34) Here, we report an ENU mouse mutant with kyphoscoliosis due to lumbar vertebral fusion.
Ethics statement
All animal studies were carried out using guidelines issued by the Medical Research Council (MRC) (UK) in "Responsibility in the Use of Animals for Medical Research" (July 1993) and Home Office Project License Number 30/2433. Experiments were also approved by the MRC Harwell ethics committee.
Generation of mutant mice
Male C57BL/6J mice were treated with ENU and mated with untreated C3H/HeH female mice, (31) and the resulting progeny were screened at 12 weeks of age for autosomal dominant phenotypes. (34) In vitro fertilization was used to generate progeny, using methods previously described. (35) Mice were fed an expanded rat and mouse no. 3 diet. (36)
Anaesthetized mice were assessed by digital radiography at 26 kV for 3 s using a Faxitron MX-20 digital X-ray system (Faxitron X-ray Corporation, Lincolnshire, IL, USA) (34) and DXA using a Lunar PIXImus densitometer (GE Healthcare, Chalfont St Giles, UK), as reported. (37) X-ray images were processed using the DicomWorks software (http://www.dicomworks.com/) and DXA images were processed using the PIXImus software. (37)
Micro-computed tomography analysis
Formalin-fixed skeletons and dissected bones were examined using a micro-computed tomography (μCT) scanner (model 1172a; Skyscan/Bruker, Kontich, Belgium) at 50 kV and 200 μA, utilizing a 0.5-mm aluminum filter and a detection pixel size of 4.3 μm² (tibias and lumbar vertebrae) and 17.4 μm² (spinal columns and rib cages). For each specimen, images were captured every 0.7 degrees through a 360-degree rotation. The lumbar vertebrae were scanned separately to measure trabecular bone, (38) using a detection pixel size of 4.3 μm², and images were scanned every 0.7 degrees through a 180-degree rotation. (34) Scanned images were reconstructed using Skyscan NRecon software and analyzed using the Skyscan CT analysis software (CT Analyser v1.8.1.4; Skyscan). (34) Total bone volume (mm³) and bone mineral density (g/cm³) were measured over the entire volume of the bone (CT Analyser v1.8.1.4; Skyscan). Trabecular bone volume as a proportion of tissue volume (BV/TV, %), trabecular thickness (Tb.Th, mm × 10⁻²), trabecular number (Tb.N, mm⁻¹), and structure model index (SMI) were assessed for the first and second lumbar vertebrae, using the CT analysis software. (34) Intact vertebral columns were modeled using Skyscan CT volume software and images were captured (CT Vol: Realistic 3D-Visualization v1.11.0.2; Skyscan). Cross-sections of lumbar vertebrae and tibias were generated using Skyscan CT analysis software.
Histology
Dissected vertebrae were fixed in 10% formalin, decalcified in Formical-4 (Decal Chemical Corporation, Suffern, NY, USA) for 3 days before embedding in paraffin wax. (34) Sections (3 to 4 μm) were cut using a Leica Microsystems microtome (Leica Microsystems, Milton Keynes, UK) and stained with hematoxylin and eosin (H&E). Slides were examined using a Leica microscope model DM4000B (Leica Microsystems) and images captured using a QImaging camera model 10-RET-OEM-F-CLR-12 (QImaging, Surrey, BC, Canada). (34)
Plasma biochemistry
Blood samples were collected from the lateral tail vein of mice that had fasted for 4 hours. Plasma samples were analyzed, using a Beckman Coulter AU680 semi-automated clinical chemistry analyzer (Beckman Coulter, High Wycombe, UK), for total calcium, phosphate, and albumin concentrations, and alkaline phosphatase activity, as described. (39) Plasma calcium was adjusted for variations in albumin concentrations using the formula ((albumin − mean albumin) × 0.02) + calcium, as described. (39)
Statistical analysis
Statistical analysis was performed using Microsoft Excel 2010 (Microsoft Corp., Redmond, WA, USA) and GraphPad Prism (GraphPad Software, Inc., La Jolla, CA, USA). Significance of differences was assessed by unpaired two-tailed Student's t test or Fisher's exact test (37); p < 0.05 was considered significant.
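As an illustration (a generic sketch, not the authors' code) of the two statistical tests named above, using Python's scipy; the body-weight values are placeholders, while the affected/unaffected counts are taken from the Results section below.

```python
from scipy import stats

# Unpaired, two-tailed Student's t test on hypothetical body-weight values (g).
wt_weight = [28.1, 29.0, 27.5, 30.2, 28.9]
hvf_weight = [21.2, 20.6, 21.5, 20.9]
t_stat, p_t = stats.ttest_ind(wt_weight, hvf_weight)

# Fisher's exact test on the 2x2 table of affected vs. unaffected progeny by sex
# (8/25 males and 15/27 females affected, as reported in the Results).
table = [[8, 17], [15, 12]]
odds_ratio, p_fisher = stats.fisher_exact(table)

print(f"t test: p = {p_t:.3f}; Fisher's exact test: p = {p_fisher:.3f}")
```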
Mapping studies, genome sequencing, Sanger DNA sequence analysis and amplification-refractory mutation system-PCR
Genomic DNA was extracted from ear or tail biopsies as described, (34) and amplified by PCR for genomewide mapping using a panel of 91 SNP loci arranged in chromosome sets, and the products were analyzed by pyrosequencing. (40) Whole-genome sequencing was undertaken using DNA from one affected HVF mouse and the two parental strains (C57BL/6J and C3H/HeH) to generate a library, and 100-nucleotide (nt) paired-end sequencing was generated using an Illumina HiSeq 2000 sequencer as described. (35) Sequencing data were analyzed using a previously described pipeline. (35) Briefly, sequences were aligned to the mouse reference genome NCBIM38/mm10 using the Burrows-Wheeler Aligner. SNPs and small indels were detected using the Genome Analysis Toolkit (GATK) UnifiedGenotyper (41) with dbSNP version 137 as the background SNP set and default parameters. Only SNPs with mapping quality >100 and read depth >3 ("high confidence SNPs") were considered further, and these were functionally annotated using next-generation sequencing (NGS)-SNP. (35) High-confidence SNPs were filtered against a precompiled list found in 17 inbred strains from the Mouse Genome Project (42) and from the two parental strains. DNA sequence analysis was undertaken by PCR amplification using gene-specific primers for individual exons and adjacent splice sites and Taq PCR Mastermix (Qiagen, Crawley, UK), and the DNA sequence of the PCR products was determined using BigDye terminator reagents and an ABI 3100 sequencer (Life Technologies, Carlsbad, CA, USA). (34) Amplification-refractory mutation system (ARMS)-PCR was undertaken for further studies of a variant in Map3k7 in 24 mice, as described. (43)
Results
Identification of HVF mice and inheritance as an autosomal dominant trait
Radiography analysis of 12-week-old progeny derived from mating an ENU-mutagenized male C57BL/6J mouse with a wild-type (WT) C3H/HeH female mouse identified a female mutant with fused lumbar vertebrae (L2 and L3) and kyphosis (Fig. 1A). Normal mating of this affected female mouse with WT C3H/HeH male mice was repeatedly unsuccessful, because the back deformities due to the kyphoscoliosis hindered mounting. In vitro fertilization, utilizing sperm from a WT C3H/HeH male mouse, was therefore used to generate progeny for inheritance testing, and radiography analysis of the 52 (25 male and 27 female) progeny at 12 weeks revealed that 23 (eight males and 15 females; i.e., 44%) were affected with fusion of two to four lumbar vertebrae, consistent with an autosomal dominant inheritance. The phenotype was designated as hereditary vertebral fusion (HVF), and the locus as Hvf. HVF was associated with kyphosis in 30% of affected mice, scoliosis in 17%, and kyphoscoliosis in 30% of mice (Fig. 1B, C). The affected mice did not have dysmorphology or other radiological abnormalities and were therefore representative of nonsyndromic kyphoscoliosis. In addition, inspection of these mice did not detect any gross morphological abnormalities at earlier ages. μCT scanning analysis confirmed the occurrence of the spinal abnormalities of kyphosis (Fig. 2A) and scoliosis (Fig. 2B) associated with fusion of the lumbar vertebrae (Fig. 2C), which was consistent with a pattern of blocked vertebrae due to failure of segmentation. Histology of the lumbar vertebrae revealed irregularly shaped vertebral bodies and displacement of intervertebral discs and ossification centers in HVF mice (Fig. 2D). The severity of the HVF phenotype was similar in males and females, and the proportions of affected males (32%) and affected females (56%) did not differ significantly (Fisher's exact test, p = 0.103), consistent with the autosomal dominant inheritance of HVF. Analysis of plasma calcium, phosphate, and albumin concentrations and alkaline phosphatase activity did not reveal any differences between HVF mice and unaffected littermates (data not shown).
Phenotypic assessment of HVF mice
Further phenotype analysis using DXA and µCT was undertaken. DXA was performed on 30 mice, which consisted of 20 WT littermates (13 males and seven females) and 10 HVF mice (three males and seven females), aged 12 weeks. Body weight was significantly reduced by >25% in the HVF mice (mean ± SD of WT versus HVF females = 28.66 ± 1.58 g versus 21.03 ± 0.78 g, p < 0.001; and WT males = 35.02 ± 2.43 g, with each of the three HVF male mice being −5 SD below the WT mean), and this was associated with a >20% decrease in lean mass (WT versus HVF females = 20.62 ± 1.68 g versus 16.15 ± 0.78 g, p < 0.001; and WT males = 27.14 ± 1.87 g, with each of the three HVF male mice being −4 to −5 SD below the WT mean) and a >40% decrease in fat mass (WT versus HVF females = 6.27 ± 1.18 g versus 3.66 ± 0.46 g, p < 0.001; and WT males = 7.9 ± 2.7 g, with each of the three HVF male mice being −1 to −2 SD below the WT mean). Whole-body bone mineral density (BMD) was similar in female HVF mice compared to WT mice (WT versus HVF = 61.1 ± 2.4 mg/cm² versus 58.4 ± 2.3 mg/cm²) and ~8% lower in male HVF compared to WT (WT = 61.0 ± 0.9 mg/cm², with each of the three HVF male mice being −3 to −6 SD below the WT mean). µCT analysis was undertaken on the lumbar vertebrae from 11 mice, which comprised five WT females and six HVF females (Fig. 3). Cross-sectional analysis of the vertebral column from WT and HVF mice (Fig. 3A) revealed HVF mice to have generalized disc narrowing, fusion of lumbar vertebrae, and displacement with compression of the neural spine adjacent to regions with dorsal disc narrowing. Cross-sectional analysis of individual lumbar vertebrae revealed the HVF mutant to have distorted transverse processes, a wider neural spine, and bone formation in the spaces between L3, L4, and L5 (Fig. 3B). In addition, L3 to L5 from the HVF mutant had distorted neural canals, and the lower ends of L2 to L5 had an abnormal orientation due to scoliosis in this region (Fig. 3B). Additional quantitative cross-sectional analysis of the first and second lumbar vertebrae (Fig. 3B) from WT (n = 13 males and n = 13 females) and HVF mice (n = 5 males and n = 9 females) revealed no significant differences in trabecular bone volume, trabecular thickness, or bone density. X-ray, DXA, and µCT analyses did not reveal any other bone phenotypes apart from the vertebrae.
Mapping of the Hvf locus to chromosome 4A3 and genome sequencing
Genome-wide mapping studies using 97 SNPs and DNA samples from 23 affected HVF mice (15 females and eight males) mapped the Hvf locus to chromosome 4. HVF was found to cosegregate with C57BL/6 alleles identified by nine SNPs from chromosome 4 (Fig. 4) (LOD score > +6.5), and no other locus was found to segregate with the HVF phenotype. An examination of the haplotypes helped to further localize the Hvf locus; 16 HVF mice had inherited non-recombinant chromosome 4 haplotypes, whereas seven HVF mice had inherited recombinant haplotypes. The recombinant haplotypes in three HVF mice helped to define the critical 5-Mb interval (Fig. 4), because two HVF mice had recombinants between the disease locus (Hvf) and the centromeric SNPs, including rs4138316, and another mouse had a recombination between Hvf and the telomeric loci, including the microsatellite locus at map position 28.2 Mb. These results locate Hvf to a 5-Mb interval flanked centromerically by rs4138316 and telomerically by the microsatellite locus at the 28.2-Mb position. This interval could not be refined further because strain-specific polymorphic loci within this region are not available. This interval contains 46 genes or likely expressed transcripts, which include: nine known protein-coding genes, including kelch-like 32 (Fig. 4); two processed transcripts; three microRNAs; five small nuclear RNAs; three large intergenic noncoding RNAs; and 24 pseudogenes (Supporting Table 1). An analysis of these genes and likely expressed transcripts did not reveal any links with the molecular pathways that are known to cause kyphosis or patterning defects, such as the Notch pathway, and we therefore carried out whole-genome sequencing. Approximately 95% of the 5-Mb candidate interval was covered by sequencing data, with an average depth of 11×. This did not identify any nucleotide variants within the coding regions and splice junctions of any of the nine coding genes, the two processed transcripts, three microRNAs, five small nuclear RNAs, three large intergenic noncoding RNAs, or 24 pseudogenes within the 5-Mb candidate interval, or up to 2 Mb telomeric or centromeric to the candidate interval. This was confirmed by Sanger DNA sequence analysis of the coding regions, splice junctions, and 5′ untranslated regions of the nine known coding genes, and of the five small nuclear RNAs, which did not reveal any abnormalities. Six intergenic and three intronic SNPs were present within the candidate region; however, none of these SNPs were found to be within sequences conserved between the human, rat, and chimp genomes (Vista Genome Browser; http://pipeline.lbl.gov/cgi-bin/gateway2), or within any regulatory regions in ENCODE (https://www.encodeproject.org/). Genome sequencing identified a novel, high-confidence coding variant in the mitogen-activated protein kinase kinase kinase 7 (Map3k7) gene, which was located ~3.8 Mb telomeric of the critical interval. This variant, a C>A transversion at c.702 of Map3k7 that predicted the missense mutation Ala179Asp, was studied, using ARMS-PCR, for cosegregation with the HVF phenotype in 24 mice (nine HVF and 15 unaffected littermates). This Map3k7 variant did not cosegregate with the HVF phenotype, because five of 15 unaffected mice were found to harbor the variant and one of nine affected HVF mice did not harbor the variant (data not shown), consistent with this Map3k7 variant being located outside the critical interval.
Thus, the causative mutation for HVF is likely to involve the regulatory region of one of the 46 genes within the critical interval, or possibly a regulatory region that acts over a longer range to alter the expression of a gene outside the critical interval.
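The way recombinant haplotypes narrow the critical interval can be expressed compactly: the locus must lie where every affected mouse still carries the C57BL/6 allele. The sketch below is a simplified illustration with invented marker names and genotype coding, not the actual 91-SNP panel or the mapping software used in this study.

# Simplified critical-interval search: markers are ordered along the
# chromosome and coded 'B6' (C57BL/6) or 'C3H' for each affected mouse.
# The locus must fall in the longest run of consecutive markers at which
# all affected mice carry the B6 allele.
def critical_interval(markers, genotypes):
    """markers: ordered list of (name, position_Mb);
    genotypes: dict mouse -> list of 'B6'/'C3H', same order as markers."""
    all_b6 = [all(g[i] == "B6" for g in genotypes.values())
              for i in range(len(markers))]
    best, start = (None, None), None
    for i, ok in enumerate(all_b6 + [False]):          # sentinel closes the last run
        if ok and start is None:
            start = i
        elif not ok and start is not None:
            if best[0] is None or i - start > best[1] - best[0]:
                best = (start, i)
            start = None
    lo, hi = best
    return markers[lo], markers[hi - 1]

# Hypothetical example with three markers flanking the mapped region.
markers = [("rs4138316", 23.2), ("rsA", 25.0), ("D4_28.2Mb", 28.2)]
genotypes = {"HVF_1": ["B6", "B6", "B6"],
             "HVF_2": ["C3H", "B6", "B6"],   # recombinant on the centromeric side
             "HVF_3": ["B6", "B6", "C3H"]}   # recombinant on the telomeric side
print(critical_interval(markers, genotypes))  # markers bounding the shared B6 run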
Discussion
Our study has established an ENU-induced mutant mouse model, HVF, for an autosomal dominant form of nonsyndromic kyphoscoliosis that is associated with fusion of lumbar vertebrae, and reductions in lean mass and fat mass, and this has some similarities to a form of human congenital scoliosis referred to as block vertebrae. (9) Congenital scoliosis in man can be classified into two main patterns of presentation, namely: (i) failure of formation due to the presence of hemivertebrae and wedged vertebrae; and (ii) failure of segmentation, which consists of unilateral unsegmented bars with and without hemivertebrae, and bilateral failure of segmentation (block vertebrae). (9) The HVF mouse model resembles the failure of segmentation (block vertebrae) pattern (Figs. 1 and 2; Table 1), which has been reported in ~3% of 251 patients with congenital scoliosis, (44) and to have a prevalence of approximately 0.5 to 1 per 1000 individuals. (9) Thus, identifying the genetic defect causing HVF may help to identify a component in the complex pathways regulating vertebral formation, which is likely to involve the expression of multiple genes. Indeed, studies of other mouse mutants presenting with abnormalities in the axial skeleton, including tail kinks, kyphosis, and scoliosis, have revealed multiple conserved chromosomal loci harboring genes that are likely to be candidates for human kyphosis and scoliosis (8,45); however, it is clear from their polygenic nature that no single mouse mutant identified thus far can serve as a complete model for investigating these disorders.
The phenotype of HVF mice also has some similarities to two other rodent models; these are the Ishibashi rat (ISR) model, which arose spontaneously during inbreeding of agouti rats originating from breeding of a Wistar female rat to a wild rat, (46) and the congenital scoliosis mouse model (CSmo), developed by exposing pregnant DBA/1J mice to 600 ppm of carbon monoxide on day 9 of gestation (47) (Table 1). Thus, the ISR model has similarities to HVF mice in having kyphosis that is associated with segmentation defects affecting mainly the lumbar area, as well as narrowing of intervertebral spaces, irregularity of adjacent ends of vertebral bodies, and wedging and complete bony fusion of adjacent vertebral bodies. (48,49) However, the ISR model is suggested to have an autosomal recessive inheritance, (50) and although reduced expression of Hox10 and Hox11 have been reported, (49) a mutation has not yet been identified. Details of body weight, lean mass, fat mass, and BMD content were not reported in the ISR model. (48) The CSmo model has few similarities to the HVF mice other than scoliosis, which may involve the lumbar vertebrae (47); however, the CSmo model has scoliosis in association with hemivertebrae and bars that may affect cervical, thoracic, or lumbar vertebrae. (47) Details of body weight, lean and fat mass, and BMD were not reported in the CSmo model. (47) Three other knockout mouse models with an allelic deletion of genes encoding members of the Notch signaling pathway have been reported to develop vertebral defects (Table 1). These mice have a heterozygous loss of the mesoderm posterior protein 2 (Mesp2+/−), hairy and enhancer of split 7 (Hes7+/−), and delta-like 1 (Dll1+/−) genes, and all three mouse models have been reported to develop autosomal dominantly inherited vertebral defects with low penetrance. (51) However, the penetrance and severity of these defects in embryos with an allelic deletion (ie, Mesp2+/−, Hes7+/−, or Dll1+/−) were increased by exposure to hypoxic conditions, and the resulting defects included missing pedicles and rib abnormalities. (51) These findings indicate that the Mesp2+/−, Hes7+/−, and Dll1+/− mice do not have any similarities to the HVF mouse model, which develops autosomal dominant kyphoscoliosis with high penetrance without any requirement for environmental challenge such as hypoxia. Null mutations of Tbx6, a key regulator of the Notch signaling pathway, cause congenital scoliosis in humans and rats. (27,52-54) However, the vertebral defects may occur in the cervical, thoracic, or lumbar spine, in contrast to HVF mice, in which the defects occurred only in the L2 to L5 vertebrae. Thus, our results provide a new rodent model for hereditary lumbar scoliosis. TBX6 is expressed throughout the developing axial skeleton, with differential spatial expression patterns at different time-points, as vertebral segmentation proceeds in a cranial to caudal direction. (55) We hypothesize that a noncoding mutation is the cause of the Hvf phenotype, likely through mutation of a promoter region. The cellular effect of the mutation is therefore likely to be one of altered regulation of a pathway, either in a time-specific or space-specific manner, rather than disruption of a whole pathway, as is the case with TBX6 null mutations.
Thus, if cranial-caudal development of the spine is either terminated at a different time or delayed, then this may be more likely to induce defects in the caudal part of the spine rather than in the cranial regions or throughout, or if spatial specificity is disrupted, then a small specific set of vertebrae may be affected. The time-sensitive nature of vertebral segmentation is demonstrated by Mbtps1 conditional knockout mice, in which Mbtps1 was knocked out at embryonic day 8.5 (E8.5) in the caudal region of the embryo. These mice developed fusion of the lower lumbar vertebrae, in a similar pattern to HVF mice. (56) Our studies mapped the Hvf locus to a 5-Mb interval on chromosome 4A3 (Fig. 4), and the region syntenic to this interval (23.2 to 28.2 Mb) in humans (94.2 to 98.8 Mb) is on chromosome 6q16.1. Genetic mapping studies in humans have revealed loci for scoliosis and/or kyphosis on chromosomes 8, 17p11.2, and 19q13.3, and these do not correspond to the region syntenic to mouse chromosome 4A3, thereby indicating that further studies of HVF mice may also identify a gene for kyphoscoliosis in humans.
Fig. 4. Haplotype analysis using chromosome 4 SNPs in mice affected with HVF. The Hvf locus is inherited with C57BL/6 alleles and is located in a 5-Mb interval between the centromeric SNP rs4138316 and the telomeric microsatellite locus at the 28.2-Mb position. The region contains nine protein coding genes that were studied for mutations, 24 pseudogenes, two processed transcripts, three large intergenic ncRNAs, five snRNAs, and three miRNAs. ncRNA = noncoding RNA; snRNA = small nuclear RNA; miRNA = microRNA.
Table 1. Phenotypic Features of HVF Mice, CSmo, Heterozygous Knockout Mouse Models (Mesp2+/−, Hes7+/−, and Dll1+/−), ISR, and CShu.
Indeed, kyphosis and/or scoliosis are likely to involve many different genes, as well as representing common clinical endpoints for a number of diseases that have different pathogenetic mechanisms, (8,45) such as the CHARGE syndrome, which is associated with later-onset scoliosis in more than 60% of patients. (11) The CHARGE syndrome is due to mutations of the chromodomain-helicase-DNA-binding protein 7 (CHD7) gene, (11) and a CHD7 polymorphism (rs4738824, chromosome 8q12) has been associated with susceptibility to idiopathic scoliosis in a group of families of European descent. (15) However, CHD7 gene abnormalities have not been identified in other studies of scoliosis pedigrees, thereby indicating the likely involvement of other genes in the etiology of the idiopathic forms of kyphosis and/or scoliosis.
Analysis of the genes within the Hvf candidate interval (Supporting Table 1) did not reveal any links with molecular pathways known to be involved with kyphosis, patterning defects, or vertebral fusion, such as the Notch pathway. However, the candidate interval contains Ndufaf4, which encodes a protein that is required for assembly of complex I of the mitochondrial respiratory chain, and affected individuals from a family with isolated mitochondrial complex I deficiency due to homozygous mutations in NDUFAF4 have been reported to have kyphosis; however, this occurred in conjunction with severe metabolic acidosis, encephalopathy, and death in infancy, whereas heterozygous parents and siblings were unaffected. (57) HVF mice did not have premature death or encephalopathy; moreover, the HVF phenotype was inherited in an autosomal dominant manner, thereby indicating that HVF is unlikely to be due to defects of Ndufaf4. Furthermore, mice with a heterozygous deletion of Ndufs4, which encodes one of the proteins within complex I, and which therefore have isolated complex I deficiency, were indistinguishable from WT mice, with no kyphoscoliosis reported. (58) The candidate interval also contains three microRNAs; however, these are all novel microRNAs whose targets are unknown.
We also investigated a coding variant in Map3k7 (Ala179Asp) that was located outside the critical interval, because Map3k7 (also known as TGF-β-activated kinase 1 [TAK1]) is a member of the signaling pathway that links TGF-β and bone morphogenetic protein (BMP) with activation of the p38 MAPK pathway, which plays a critical role in bone growth. However, this Map3k7 variant did not cosegregate with the HVF phenotype, so involvement of this missense Map3k7 variant in causing HVF is unlikely, which is consistent with other observations from mutant mouse and human disease studies. Thus, homozygous Map3k7 knockout mice are embryonically lethal, and heterozygous Map3k7 knockout mice have no phenotype. (59,60) Moreover, osteoblast-specific Map3k7 knockout mice developed clavicular hypoplasia and delayed closure of the fontanelles, similar to the human disorder of cleidocranial dysplasia, reduced trabecular bone, and a moderate decrease in body weight, but did not develop vertebral defects. (61) Conversely, osteoclast precursor-specific Map3k7 knockout mice displayed skull overgrowth and increased trabecular bone, but again did not develop any vertebral abnormalities. (62,63) Furthermore, heterozygous mutations in MAP3K7 in humans cause the syndromic skeletal disorders of cardiospondylocarpofacial syndrome and frontometaphyseal dysplasia. (64,65) These studies illustrate that Map3k7 plays an important role in bone development, and it is possible that the HVF mice harbor a noncoding mutation within the critical interval that alters the regulation of Map3k7 specifically in developing vertebrae. Indeed, a recent study has suggested that familial idiopathic kyphoscoliosis/scoliosis in a series of seven families, in whom a critical interval of 3.5 Mb on chromosome 5p was previously defined, may be due to a noncoding mutation within a regulatory region that affected the expression of an unknown target gene(s). (66) Because there are no known links between any of the genes within the Hvf candidate interval and pathways involved with kyphosis, patterning defects, or vertebral fusion, the genetic defect causing the HVF phenotype may reveal novel biological mechanisms involved with these processes.
In humans, a number of inherited diseases, with autosomal dominant and recessive, and X-linked inheritances, have been described to be due to noncoding mutations. These include triphalangeal thumb/preaxial polydactyly (autosomal dominant with variable penetrance), due to mutations in the ZPA regulatory sequence, a long-range cis-acting regulator of Sonic Hedgehog (SHH) gene expression (67-69); and autosomal recessive isolated pancreatic agenesis due to noncoding mutations downstream of pancreas-specific transcription factor 1a (PTF1A). (70) Such mutations are frequently large deletions or duplications, but may also consist of single-nucleotide mutations, similar to those most frequently induced by ENU, as found in isolated pancreatic agenesis, and recently described in the promoter region of ovo-like zinc finger 2 (OVOL2) in families with autosomal-dominant corneal endothelial dystrophies. (71) Mutations upstream of PTF1A were found to reduce the expression of PTF1A, whereas mutations in the OVOL2 promoter were able to induce OVOL2 expression, likely leading to aberrant ectopic OVOL2 expression in the developing cornea. (70,71) Either of these two molecular mechanisms could account for the dominant disease presentation in Hvf mice, eg, through a dosage effect due to a reduction in the level of a critical transcript, or through ectopic induction of a transcript, and either of these may be time- and/or spatially specific. Future work to identify the Hvf causative mutation will focus on analyses of the six intergenic and three intronic noncoding variants in transcriptional assays, such as luciferase assays, similarly to those undertaken for the PTF1A- and OVOL2-associated noncoding variants. (70,71) To determine the specific role of the variants in somitogenesis, these assays may need to be undertaken in a somite cell line, such as cells derived from pluripotent stem cells, (72) or in bone-specific cells such as chondrocytes.
In summary, our study has established an ENU-induced mouse model for autosomal-dominant congenital scoliosis and identification of the causative genetic defect will help in further elucidating the molecular mechanisms associated with congenital scoliosis due to segmentation defects.
Disclosures
All authors state that they have no conflicts of interest. | 6,909 | 2018-03-08T00:00:00.000 | [
"Medicine",
"Biology"
] |
Interference of Oxidative Metabolism in Citrus by Xanthomonas citri pv citri
Citrus are one of the most important fruit crops grown worldwide. Among the pathogens that cause disease of Citrus sp. and closely related genera, Xanthomonas citri pv citri (Xcc) causes citrus canker, a devastating disease that is found in 30 countries worldwide and has caused significant economic loss (Del Campo et al., 2009; Rigano et al., 2010). The principal mode of transmission of Xcc is through heavy rain and high wind events and thus the disease is most severe in regions that experience occasional tropical storms and hurricanes (Graham et al., 2004). Citrus canker outbreaks in Florida, for example, have contributed to a decline in acreage of grapefruit to 61 % by 2009 compared to the acreage in 1994 (Anonymous, 2009). Severe canker can cause fruit drop and even tree death (Graham et al., 2004). Further economic losses can be incurred through restricted movement of infected fruits especially to citrus growing regions where canker is not found (Schubert et al., 2001).
Introduction
Citrus are one of the most important fruit crops grown worldwide. Among the pathogens that cause disease of Citrus sp. and closely related genera, Xanthomonas citri pv citri (Xcc) causes citrus canker, a devastating disease that is found in 30 countries worldwide and has caused significant economic loss (Del Campo et al., 2009; Rigano et al., 2010). The principal mode of transmission of Xcc is through heavy rain and high wind events, and thus the disease is most severe in regions that experience occasional tropical storms and hurricanes (Graham et al., 2004). Citrus canker outbreaks in Florida, for example, have contributed to a decline in acreage of grapefruit to 61% by 2009 compared to the acreage in 1994 (Anonymous, 2009). Severe canker can cause fruit drop and even tree death (Graham et al., 2004). Further economic losses can be incurred through restricted movement of infected fruits, especially to citrus growing regions where canker is not found (Schubert et al., 2001).
The commercial and dietary importance of citrus and the severity of canker have led to extensive research to identify resistant genotypes that would serve as models of study as well as germplasm for crop improvement. Most commercial citrus are within the Citrus genus; however, closely related genera are capable of hybridizing with Citrus sp. and thus have been included in studies to evaluate variation in plant defense to canker. Citrus genotypes can be classified into four broad classes based on susceptibility to canker (Gottwald, 2002). The most highly susceptible commercial genotypes are 'Key' lime [C. aurantifolia (Christm.) Swingle], grapefruit (C. paradisi Macfad.), lemon (C. limon), and pointed-leaf Hystrix (C. hystrix). Susceptible genotypes include limes (C. latifolia), sweet oranges (C. sinensis), trifoliate orange (P. trifoliata), citranges and citrumelos (P. trifoliata hybrids), and bitter oranges (C. aurantium). Resistant genotypes include citron (C. medica L.) and mandarins (C. reticulata Blanco). Highly resistant genotypes include calamondin [Citrus margarita (Lour.)] and kumquat [Fortunella margarita (Lour.) Swingle]. The high degree of resistance to Asiatic citrus canker by calamondin, kumquat, and Ichang papeda (C. ichangenesis) has been noted in the field (Reddy, 1997; Viloria et al., 2004). The sequence of events in the pathogenesis of Xcc in citrus has been described (Brunings and Gabriel, 2003). Following artificial inoculation, the bacterial cells occupy intercellular spaces and begin to divide by the end of the first day after inoculation. Once a critical population threshold is reached, which is about 1 × 10^3 to 1 × 10^4 bacteria per canker lesion, a quorum-sensing mechanism (da Silva et al., 2002) is likely the impetus that turns on pathogenicity factors (Bassler, 1999), including Rpf-encoding genes (Slater et al., 2000). Within 2 days after inoculation, Xcc attaches to plant cell walls via specialized proteins called "adhesins" (Lee and Schneewind, 2001), by hrp (hypersensitivity response and pathogenicity) pili or by type IV pili, as observed during the Xanthomonas campestris pv. malvacearum-Gossypium hirsutum interaction (Brunings and Gabriel, 2003). Once attached, Xcc uses its T3S system to turn on additional pathogenicity genes (Pettersson et al., 1996) and inject pathogenicity factors into the cell, including Avr, Pop and Pth proteins such as PthA (Brunings and Gabriel, 2003). PthA presumably stimulates plant cell division and enlargement within 3 days after inoculation, which reaches a maximum by 7 days after inoculation (Lawson et al., 1989). Cell enlargement, the presence of the bacteria in the apoplast, and their production of hydrophilic polymers cause watersoaking symptoms starting 4 days after inoculation (Duan et al., 1999). The maximum bacterial populations occur at 7 days after inoculation (Khalaf et al., 2007), and about 8 days after inoculation the epidermis ruptures, allowing bacteria to egress to the surface (Brunings and Gabriel, 2003). By 10-14 days after inoculation, necrosis develops in the infected areas (Duan et al., 1999), and by 21 days after inoculation leaves abscise (Khalaf et al., 2007).
Oxidative response of plants to pathogens
The hypersensitive response (HR) involves a rapid, widespread change in plant cell metabolism intended to alter the chemistry of the region within and surrounding the infected area in order to impact the pathogen by deterring its metabolism, isolating it within the infected region, and even directly killing it (Lamb and Dixon, 1997). As part of the response, programmed cell death (PCD) of plant cells within and adjacent to the infected region is often elicited (Lamb and Dixon, 1997). The HR includes alteration of oxidative metabolism to produce reactive oxygen species (ROS) that promote PCD, sicken pathogen metabolism, and promote changes in cell wall chemistry that isolate the pathogen (Azevedo et al., 2008; Kuzniak and Urbanek, 2000; Lamb and Dixon, 1997). In the case of citrus canker, PCD is evident around infection sites by chlorosis, with the chlorotic rings widening as the canker spreads radially from the infection point and along the plane of the leaf blade (Brunings and Gabriel, 2003).
Reactive oxygen species produced during HR and PCD in response to pathogens include superoxide radicals (O2˙−), hydrogen peroxide (H2O2), and hydroxyl radicals (OH˙) (Chen et al., 2008; Lamb and Dixon, 1997; Wojtaszek, 1997). Production of ROS occurs during normal metabolism of uninfected plants, and ROS are maintained at low concentrations by several enzymatic and non-enzymatic pathways. In response to infection by pathogens, concentrations of ROS are increased and compartmentalized during HR and PCD via several pathways mediated by signals including salicylic acid, nitric oxide, and the MAP kinase cascade mechanism (Durrant and Dong, 2004; Vlot et al., 2009) to alter the chemistry within and surrounding the infection site (Mittler, 2002).
One important ROS is H2O2, the concentration of which has been correlated with disease resistance (Lamb and Dixon, 1997; Mittler et al., 1999). H2O2 concentrations can increase very rapidly from 0 to 6 days after inoculation during plant-bacterial pathogen interactions (Wojtaszek, 1997; Gay and Tuzun, 2000).
Based on their metal co-factor, SODs can be classified into three categories: iron SOD (Fe-SOD), manganese SOD (Mn-SOD), and copper-zinc SOD (Cu-Zn-SOD), each of which is specifically compartmentalized in the cell (Alscher et al., 2002). Fe-SOD is located in the chloroplasts, Mn-SODs in the mitochondria and peroxisomes, and Cu-Zn-SOD in the chloroplast, cytosol, and possibly in the apoplast (Alscher et al., 2002). The various SODs play important roles in plant/pathogen interactions. Fe-SOD, for example, appears to be involved in the early signaling with H2O2 by plant cells after infection (Mur et al., 2008; Zurbriggen et al., 2009). Mn-SOD has been reported to play an important role in early apoptotic events during PCD in the Gossypium hirsutum-Xanthomonas campestris pv. malvacearum interaction (Voludakis et al., 2006). However, Kukavica et al. (2009) showed the existence of a cell wall-bound Mn-SOD that generated OH˙ in pea roots and probably facilitates cell elongation.
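For reference, the core reactions catalyzed by the enzymes discussed in this and the following sections can be summarized as follows (standard textbook stoichiometry written in LaTeX, not data from the studies cited):

% Superoxide dismutase (SOD): dismutation of superoxide to hydrogen peroxide
2\,\mathrm{O_2^{\bullet-}} + 2\,\mathrm{H^+} \xrightarrow{\text{SOD}} \mathrm{H_2O_2} + \mathrm{O_2}

% Catalase (CAT): decomposition of hydrogen peroxide
2\,\mathrm{H_2O_2} \xrightarrow{\text{CAT}} 2\,\mathrm{H_2O} + \mathrm{O_2}

% Ascorbate peroxidase (APOD): H2O2 reduction using ascorbate as the electron donor
\mathrm{H_2O_2} + 2\,\text{ascorbate} \xrightarrow{\text{APOD}} 2\,\mathrm{H_2O} + 2\,\text{monodehydroascorbate}

% Class III peroxidase (POD): oxidation of phenolics (RH) for cell-wall cross-linking
\mathrm{H_2O_2} + 2\,\mathrm{RH} \xrightarrow{\text{POD}} 2\,\mathrm{R^{\bullet}} + 2\,\mathrm{H_2O}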
Comparative analysis of oxidative metabolism in Xcc resistant and susceptible genotypes
Recent studies on various Citrus sp. and closely related genera have increased our understanding of deficiencies in oxidative metabolism in susceptible genotypes. The most commonly studied resistant genotype is kumquat [Fortunella margarita (Lour.) Swingle]. The kumquats have been characterized as canker resistant based on fewer canker lesions per leaf and reduced internal bacterial populations per lesion compared to susceptible genotypes (Khalaf et al., 2007; Viloria et al., 2004). Resistance of kumquat has been exhibited in hybrids with Citrus sp. such as 'Lakeland' limequat, a cross between the highly Xcc-susceptible 'Key' lime and kumquat, which has demonstrated greater canker resistance than 'Key' lime alone under field conditions (Viloria et al., 2004). Furthermore, the Asiatic strain of canker (Canker A) has been shown to reach population densities consistent with a compatible reaction (Stall et al., 1980), and the lower concentrations of Xcc in kumquat indicate a disease resistance mechanism (Viloria et al., 2004). Although oxidative metabolism is complex, recent research has focused on comparing canker-resistant kumquat and susceptible Citrus genotypes with respect to their H2O2 metabolism, in part due to its importance in cell signaling and its involvement in cell wall chemistry during growth and plant defense.
The basal antioxidant metabolism has been shown to vary among different citrus genotypes (Kumar et al., 2011a), which relates to their fundamental differences in resistance. Kumquat, for example, was shown to have higher total SOD activity than grapefruit and sweet orange, yet H2O2 was lower in kumquat, in part because of higher CAT activity. These fundamental differences in basal metabolism are the starting point for changes in oxidative metabolism when challenged with Xcc.
Oxidative metabolism in canker-resistant kumquat
Using an Asiatic strain of canker (Canker A) and infiltration of kumquat leaves, Kumar et al. (2011c) showed that the Xcc populations peaked 4 days after inoculation and declined thereafter. Chlorosis was evident the first day after inoculation and persisted throughout the infection process (Fig. 1). Water soaking was delayed until 4 days after inoculation. H2O2 concentrations increased rapidly starting 1 day after inoculation, reached almost 2x the controls (about 10 mM) from 6 to 8 days after inoculation, and declined slightly thereafter but remained above the controls throughout the infection process (Figs. 1 and 2). The pattern of Xcc population and H2O2 concentrations is consistent with the latter's role in impeding bacterial growth and promoting PCD, which occurred from 10 to 12 days after inoculation. The rapid necrosis in the localized region of the infected kumquat tissue by Xcc has been suggested to be consistent with a hypersensitive response (HR) and induced PCD (Khalaf et al., 2007). Lipid peroxidation was shown to increase rapidly and remain several times higher than the controls in the kumquat-Xcc interaction (Kumar et al., 2011c,e). Keeping in mind that total SOD activity in the kumquat-Xcc interaction increased and remained high throughout pathogenesis, the decline in Fe-SOD activity beyond the first day after inoculation had to be replaced by a different form of SOD that would dominate during the second peak of total SOD activity. Kumar et al. (2011e) found that Mn-SOD activity increased from 2x to 3x that of the control starting 2 days after inoculation and reached a maximum during the second peak of total SOD activity from 6 to 8 days after inoculation. The prolonged, elevated Mn-SOD activity indicated that this class of SOD was responsible for the majority of total SOD activity throughout the entire pathogenesis process. Mn-SOD is generally considered to be limited to mitochondria and peroxisomes (Alscher et al., 2002). One SOD reported to be located in plant apoplasts is Cu-Fe-SOD (Alscher et al., 2002), and in kumquat infected with Xcc a putative Cu-Fe-SOD gene was up-regulated 2 to 7 days after inoculation (Khalaf et al., 2007); however, activity of this SOD isoform was not detected (Kumar et al., 2011e). Mn-SOD was also suggested to be involved in cell elongation (Kukavica et al., 2009), which is one of the early events during canker development (Khalaf et al., 2007). Kukavica et al. (2009) proposed a novel role for cell wall-bound Mn-SOD that assists in POD-mediated cell elongation by producing OH˙ in the apoplast. Although the formation of OH˙ during the kumquat-Xcc interaction is not verified, its formation is consistent with plant defense considering its high toxicity to Xanthomonas spp. (Vattanaviboon and Mongkolsuk, 1998). Nevertheless, production of O2˙−
and conversion of it plus H2O2 to OH˙ in kumquat-Xcc interactions needs to be determined.
In summary, kumquats respond to Xcc by promoting higher concentrations of H2O2 through temporal and qualitative changes in enzymes involved in its synthesis and dismutation. H2O2 is produced initially through increased chloroplastic SOD activity 1 day after inoculation and thereafter through increased mitochondrial and peroxisomal SOD activity. Elevated symplastic H2O2 concentrations are maintained by declining APOD and, later, CAT activity. We propose that the elevated concentration of H2O2 diffuses from the symplast to the apoplast, where it directly inhibits bacterial metabolism and is utilized by POD. The higher POD activity presumably utilizes H2O2 to cross-link cell walls and perhaps produce highly toxic OH˙.
Oxidative metabolism in canker-susceptible grapefruit and sweet orange
Using the same strain of Asiatic canker, infiltration method, and growing conditions as for kumquat (Kumar et al., 2011c,e), the bacterial population in grapefruit and sweet orange leaves grew to 1 × 10^9 CFU/cm² (Kumar et al., 2011b,d), which was 10x that of kumquat (Kumar et al., 2011e). In general, the responses of grapefruit and sweet orange to Xcc were similar. Whereas the Xcc population peaked in kumquat 4 days after inoculation, the population peak occurred 8 days after inoculation in grapefruit (Figs. 1 and 3) and 14 days after inoculation in sweet orange. Chlorosis was evident in grapefruit and sweet orange by the first day after inoculation, as in kumquat. However, water soaking, which did not occur until 4 days after inoculation in kumquat, occurred by the second day in grapefruit and sweet orange. Furthermore, swelling of the leaves in the inoculated region was evident starting 6 days after inoculation. Necrosis was evident from 16 days after inoculation until leaf abscission, which occurred a week later than in kumquat.
Unlike H2O2 concentrations in kumquat, which increased and remained high until Xcc populations declined, H2O2 concentrations in grapefruit and sweet orange leaves demonstrated a biphasic pattern. There was an initial surge in H2O2 concentration in both susceptible genotypes similar to that found in kumquat, except that it reached only 1/3 the concentration and the surge only lasted until 4 days after inoculation (Kumar et al., 2011b,d). H2O2 concentrations declined to or below the controls and then surged a second time, but only to the same concentrations or to concentrations slightly above the controls, from 12-14 days after inoculation. The crash in H2O2 concentration occurred very late in the log phase of bacterial growth, the stage most susceptible to H2O2 (Tondo et al., 2010), which allowed extension of that phase, resulting in the higher bacterial populations compared to kumquat.
The disturbance in H2O2 concentration was related to temporal and qualitative changes in the activities of enzymes involved in H2O2 metabolism. Total SOD activity in grapefruit and sweet orange generally followed that of H2O2 concentration, with a peak in activity occurring 4 days after inoculation followed by a rapid decline to activities similar to or less than the controls for the rest of the infection process (Kumar et al., 2011b,d). The initial increase in total SOD activity was due to a surge in Fe-SOD activity similar to that of kumquat. Three Fe-SOD isoforms were detected in both infected and control leaves of grapefruit, but it was Fe-SOD 2 that contributed most of the Fe-SOD activity observed. Down-regulation of Fe-Sod1 transcription was observed in Botrytis cinerea-infected cultured cells of Pinus pinaster (Azevedo et al., 2008), but whether this gene is involved in Xcc-susceptible citrus genotypes is unknown.
Manganese superoxide dismutase activity surged in a manner similar to kumquat but then crashed to levels similar to the controls by 4 days after inoculation (Kumar et al., 2011b,d). Thus, the decline in H2O2 concentration in grapefruit and sweet orange was due in part to suppression of Mn-SOD activity. Four Mn-SOD isoforms were observed in grapefruit (Kumar et al., 2011d). Mn-SOD 3 was constitutively active; however, Mn-SOD 1 and 2 were higher from 2 to 4 days after inoculation but thereafter gradually disappeared. It appears, then, that the appearance of Mn-SOD 1 and 2 is initially promoted in response to Xcc infection, but the response dissipates later in the infection process. A weakly stained Mn-SOD 4 was observed at 10 days after inoculation and appeared to be a last attempt by the host to generate more H2O2 to suppress Xcc or as part of PCD in the infected zone (Vattanaviboon and Mongkolsuk, 1998).
In addition to changes in activities of the various SODs, H2O2-degrading enzymes also demonstrated temporal and qualitative changes in activity (Kumar et al., 2011b,d). Catalase activity increased above the control in grapefruit starting 2 days after inoculation and remained above the control, peaking 16 days after inoculation, which is opposite to kumquat, where CAT activity was suppressed (Kumar et al., 2011b). Four CAT isoforms were detected in controls and six in Xcc-infected grapefruit, with CAT 4 and 5 novel in the latter plants and the intensity of the CAT 2 and 4 bands very high compared to the controls. Higher expression of CAT 2 mRNA in roots of potato was found during pathogenesis of Corynebacterium sepedonicum NCPPB 2137 and Erwinia carotovora ssp. carotovora NCPPB 312, providing the first evidence that class II CAT isoforms are also pathogen-induced (Niebel et al., 1995). Thus the elevated CAT activity partially explains the decline in H2O2 concentrations in grapefruit.
Unlike kumquat, where APOD activity was suppressed in Xcc-infected plants, APOD activity in grapefruit increased 4 days after inoculation and remained higher than the controls up to 16 days after inoculation (Kumar et al., 2011b). Like CAT, the higher APOD activity contributed to the lower H2O2 concentrations.
The class III POD activity levels were higher in Xcc-infected grapefruit and sweet orange leaves 1 day after inoculation (Kumar et al., 2011b,d), which was similar to that in kumquat. Three isoforms (POD 1, 2 and 3) were detected in control and infected leaves of both genotypes, with higher intensity of all three bands in infected tissues. In a separate study of Xcc-infected sweet orange, POD genes were shown to be up-regulated as early as 6 hours after inoculation (Cernadas et al., 2008). More than 70 isoforms of PODs have been identified in plants, and it is currently difficult to assign a physiological function to each one due to gene redundancy (Sasaki et al., 2004). Nevertheless, it is interesting that, unlike CAT and APOD, where there was a differential response in susceptible (grapefruit and sweet orange) and resistant (kumquat) genotypes, POD activity in all three genotypes increased in response to Xcc.
Proposed model of citrus response to canker
A comparison of Xcc population, symptom development, H2O2, and activities of enzymes involved in H2O2 metabolism between the resistant genotype kumquat and a susceptible genotype such as grapefruit can reveal deficiencies in susceptible genotypes. Although similar concentrations of Xcc were injected into leaves of both genotypes, the population was 10x less in kumquat than in grapefruit by 3 days after inoculation and remained substantially lower. Activity of chloroplastic Fe-SOD, located in an organelle presumed to be involved in pathogen sensing and signaling, increased 1 day after inoculation in kumquat but 2 days after inoculation in grapefruit, which indicates a delayed response in the latter genotype. The reduced Xcc population in kumquat compared to grapefruit was due, in part, to its higher H2O2. Although H2O2 increased in both species upon infection, at its peak 5 days after inoculation it was only 1/3 the concentration in grapefruit compared to kumquat. The sustained H2O2 concentration in kumquat was due to higher and sustained Mn-SOD activity and lower CAT and APOD activities. In grapefruit, however, CAT increased 1 day after inoculation, APOD increased 3 days after inoculation, and Mn-SOD declined 5 days after inoculation. There are reports showing that Xanthomonas spp. are naturally very resistant to O2˙−. Watersoaking developed earlier in grapefruit (2 days after inoculation) than kumquat (4 days after inoculation). Water soaking is a characteristic symptom of Xcc infection in citrus that is caused in part by increased uptake of water through capillary action as a consequence of loss of intercellular space between rapidly dividing and enlarging mesophyll cells (Khalaf et al., 2007; Popham et al., 1993). The earlier watersoaking of grapefruit is indicative of increased cell growth in this genotype, which was also reflected in the greater raising of the epidermis compared to kumquat. It is interesting that POD activity in both genotypes was elevated upon Xcc infection. Peroxidase serves a dual role of promoting cell enlargement by loosening the cell wall, but it is also involved in cross-linking of cell wall components during cell maturation, a process that inhibits cell enlargement (Passardi et al., 2004). Which process occurs would be substrate-dependent and would vary temporally and spatially. Such a temporal and spatial variation in POD activity has been shown to occur during cell growth of Arabidopsis thaliana leaves, where cell enlargement was promoted early and cell wall stiffening occurred later (Abarca et al., 2001). The changes in CAT, APOD and Mn-SOD that lowered H2O2 concentrations in grapefruit preceded the raised epidermis, and thus it is reasonable to assume that these concentrations of H2O2 were necessary to promote cell enlargement in this genotype, whereas the higher concentrations of H2O2 that occurred in kumquat were excessive for growth and involved in suppression of Xcc. Thus, we propose that the lower H2O2 concentrations in grapefruit promoted plant cell growth, whereas the higher H2O2 concentrations in kumquat were involved in cross-linking of cell wall polymers and possibly the production of OH˙. Solutions for managing Xcc in susceptible citrus genotypes such as grapefruit and sweet orange will need to include promoting earlier, higher, and sustained H2O2 concentrations.
The comparative studies of oxidative metabolism in susceptible and resistant genotypes to Xcc have identified deficiencies in susceptible genotypes. Altering their response, either through exogenous applications of chemicals that evoke systemic acquired resistance and induced systemic resistance or through genetic modification, should be a focus of future research. In particular, stimulation of Mn-SOD activity, which is important for sustained production of H2O2, and suppression of CAT and APOD activity to maintain high concentrations of H2O2 in susceptible genotypes should improve resistance to Xcc. Strategies that improve H2O2 metabolism to enhance resistance should provide new cultural management approaches in commercial groves for reducing the economic impact of this disease.
Fig. 2. Proposed mechanism of oxidative metabolism that promotes disease resistance in kumquat. Changes in enzyme activities and H2O2 concentration taken from Kumar et al. (2011c,e).
Fig. 3. Proposed mechanism of oxidative metabolism in grapefruit that promotes population growth of Xcc. Changes in enzyme activities and H2O2 concentration taken from Kumar et al. (2011b,d).
Early after infection, elevated concentrations of H2O2 serve as diffusible signals to induce defense genes in adjoining cells, with the later elevated concentrations serving in the direct inhibition of pathogens (Alvarez et al., 1998; Dat et al., 2000; Lamb and Dixon, 1997). Ascorbate peroxidases (APOD) use ascorbate as a substrate as part of the glutathione-ascorbate cycle (Foyer et al., 2009). Ascorbate peroxidase is ubiquitous throughout the cell and thus is important in scavenging H2O2 that is produced as a waste product of different metabolic pathways (Mittler, 2002). The importance of APOD in disease resistance has been shown in transgenic tobacco transformed with antisense cAPX (Nicotiana tabacum cv. Bel W3), which exhibited PCD accompanied by fragmentation of nuclear DNA, leading to necrotic lesions, after being challenged with Pseudomonas syringae pv. tabaci, Pseudomonas syringae pv. phaseolicola NPS3121 and Pseudomonas syringae pv. syringae (Mittler et al., 1999; Polidoros et al., 2001). The use of guaiacol as a substrate to test peroxidase activity is limited to the Class III peroxidases (POD), which are characterized by secretion into the apoplast and utilize phenolic compounds as substrates to cross-link cell walls during cell maturation (De Gara, 2004; Liszkay et al., 2003; Sasaki et al., 2004). During infection, the class III PODs promote lignification, suberization, cross-linking of cell wall proteins, and phytoalexin synthesis to sicken metabolism and isolate the pathogen (Sasaki et al., 2004; Quiroga et al., 2000). The peroxidative cycle of POD uses H2O2 as an oxidant to convert phenolic compounds to phenoxy radicals that spontaneously combine to form lignin responsible for cell wall stiffening (Liszkay et al., 2003; Martinez et al., 1998).
Products of lipid peroxidation are in turn toxic to plant and bacterial cells, consistent with PCD as part of the HR to pathogens (Gobel et al., 2003; Kumar et al., 2011e; Rusterucci et al., 1996). It is interesting that, using the injection method, kumquat did not display much swelling of the epidermis, which is required for egress of Xcc to the leaf surface. Kumar et al. (2011c,e) concluded that the retention of bacteria in the leaf, coupled with early leaf abscission, which occurred from days 10 through 12, is consistent with a disease avoidance mechanism. Although Fe-SOD activity initially surged, high concentrations of H2O2 have been shown to deactivate Fe-SOD (Giannopolitis and Ries, 1977), which is consistent with the suppression of Fe-SOD activity after the first day (Kumar et al., 2011e).
H2O2 production during the kumquat-Xcc interaction occurred, in part, through SOD activity (Liu et al., 2007; Zurbriggen et al., 2009). Kumar et al. (2011e) showed that total SOD activity demonstrated two peaks during the course of Xcc infection of kumquat, at 1-2 days after inoculation and 6-8 days after inoculation, although the total SOD activity was always higher than in the uninfected controls. The activity and isoforms of the various SODs were shown to be altered, indicating compartmentalization of H2O2 production (Kumar et al., 2011c,e). The first peak in total SOD activity was associated with a rapid increase in Fe-SOD activity to 2x the controls by 1 day after inoculation, but the activity dropped rapidly to near or below the controls thereafter. Fe-SOD is compartmentalized in chloroplasts, and studies on other plant-pathogen interactions have shown that chloroplasts are an important source of ROS signals that initiate changes in oxidative metabolism in other cellular compartments (Mur et al., 2008; Zurbriggen et al., 2009). Cu-Zn-SOD is also found in the chloroplasts (Alscher et al., 2002), but Kumar et al. (2011e) found no activity of this SOD isoform during the kumquat-Xcc interaction. Mitogen-activated protein kinases (MAPK), which respond to external stimuli, are activated in plant-pathogen interactions and promote ROS generation in chloroplasts by inhibiting CO2 assimilation, which serves as a sink for ROS generated by light (Liu et al., 2007; Zurbriggen et al., 2009). Evidence that this mechanism functions during the kumquat-Xcc interaction is supported by differential expression of related genes (Khalaf et al., 2007). The sustained Mn-SOD activity (Kumar et al., 2011c) also underscores the importance of mitochondria in generating ROS to promote PCD (Mur et al., 2008; Yao et al., 2002). Thus, the elevated H2O2 concentration during the kumquat-Xcc interaction is promoted by SOD activity, first in the chloroplast and thereafter in the peroxisome and mitochondria, and the sustained production of H2O2 in peroxisomes and mitochondria indicates that these organelles serve as important generators of H2O2 during kumquat-Xcc interactions.
The fate of H2O2 in the kumquat-Xcc interaction is determined, in part, by enzymes involved in its degradation. Catalase is considered the major H2O2-scavenging enzyme and is located in the peroxisomes of plant cells (Kamada et al., 2003; Hu et al., 2010). During the kumquat-Xcc interaction, total CAT activity remained similar to the controls up to 5 days after inoculation but declined starting 6 days after inoculation to almost half of the controls (Kumar et al., 2011c). Interestingly, CAT demonstrated qualitative and temporal changes in isoforms (Kumar et al., 2011c). Plants have been shown to contain three CAT genes that code for three subunits and generate at least six isoforms that are classified into three classes (Hu et al., 2010). Class I CATs are abundant in tissues that contain chloroplasts, Class II CATs are mainly expressed in vascular tissues, and Class III CATs are generally found in young and senescent tissues. In uninfected kumquat leaves, Kumar et al. (2011c) identified 4 CAT isoforms (CAT 1-4) that appeared to be constitutive and therefore belong in Classes I and II. Changes in the constitutive isoforms were evident by 4 days after inoculation, and CAT-4 declined starting at 10 days after inoculation, probably due to termination of all metabolic activity because of necrosis. A novel CAT isoform, CAT-5, was expressed 4 days after inoculation and appears to belong to Class III, since senescence, as indicated by chlorosis, rapidly developed at this time. There was no evidence of CAT-6. The decline in CAT activity coincided with the highest concentrations of H2O2 but occurred during the stationary phase of Xcc population growth (Kumar et al., 2011e). Xcc during the log phase of growth in kumquats is highly susceptible to H2O2, with almost no survival upon exposure to 1 mM H2O2, in comparison to stationary-phase populations that can resist up to 30 mM of H2O2 (Tondo et al., 2010). H2O2 increased to almost 10 mM (Kumar et al., 2011c,e), which was high enough to restrict Xcc during the log phase but not enough to impact bacterial populations during the stationary phase of growth (Tondo et al., 2010). The Xcc stationary-phase populations were able to resist higher external H2O2 concentrations due to high bacterial CAT activity via the expression of four CAT genes (katE, catB, srpA, and katG) (Tondo et al., 2010). Transgenic tobacco (Nicotiana tabacum cv. Bel W3) with reduced CAT expression exhibited necrotic lesions and displayed elevated concentrations of pathogenesis-related proteins (Mittler et al., 1999). Thus, it appears that the reduced plant CAT activity, which occurred during the stationary phase of Xcc population growth, was too late to directly impact the pathogen. Perhaps molecular modification that reduced CAT activity earlier in kumquat would suppress Xcc concentrations further by allowing H2O2 concentrations to increase during the log phase of Xcc growth (Chaouch et al., 2010). Since CATs are limited to peroxisomes, it appears that this organelle serves an important role in canker resistance by elevating H2O2 concentrations that diffuse to the rest of the cell, and it could thus become a promising target for resistance enhancement in susceptible citrus by genetic engineering of CAT gene expression or by post-translational modification of CAT proteins (Chaouch et al., 2010).
Ascorbate peroxidase (APOD) activity declined following Xcc inoculation to less than half the activity of the controls by 12 days after inoculation (Kumar et al., 2011c). The immediate and increasing decline in APOD activity is an adaptive plant response that helps promote elevated H2O2 concentrations throughout the symplast, and APOD appears to be the principal enzyme that allowed H2O2 concentrations to increase in infected kumquat. There is evidence that higher H2O2 concentrations inactivate APODs at both the transcriptional and post-transcriptional levels (Zimmermann et al., 2006; Paradiso et al., 2005). The accumulation of H2O2 rather than O2˙− in the symplast is interesting because H2O2 is a less reactive ROS, which may indicate a role for H2O2 other than promoting senescence alone. Xcc are only found in the apoplast, and any positive effect of higher H2O2 concentrations would require diffusion out of the symplast. H2O2 in the apoplast would allow it to serve as a substrate for the Class III PODs.
During normal metabolism of uninfected plants, H2O2 is utilized by the Class III PODs to promote loosening of cell walls during cell enlargement and to cross-link cell wall polymers during cell maturation (de Gara, 2004). The Class III PODs are also an adaptive defense mechanism against pathogens, since the cross-linking of cell wall polymers diminishes the pathogen's ability to enzymatically digest the cell wall and thus isolates it in a confined area (Bradley et al., 1992; Passardi et al., 2005). Kumquat POD activity tripled 1 day after inoculation with Xcc and continued to increase up to 8 days after inoculation (Kumar et al., 2011c). No canker development occurred beyond the initial infection zone, as evidenced by water soaking upon injection, indicating isolation of the bacteria consistent with activity of the Class III PODs. No up-regulation of POD has been shown for kumquat, but transcriptional analysis has shown up-regulation of POD genes in sweet orange leaves 2 days after inoculation with Xcc (Cernadas et al., 2008).
Fig. 1. Comparison of Xcc population, canker symptoms, H2O2, and activities of enzymes involved in H2O2 metabolism for kumquat (K) and grapefruit (G) by days after inoculation (dai). Arrows for H2O2 and enzyme activities indicate a comparison of Xcc-infected to uninfected leaves; the arrows for population indicate the ratio in Xcc population between kumquat and grapefruit. Symptom classification: C = chlorosis, W = watersoaking, E = raised epidermis, N = necrosis. Enzyme classification: SOD = superoxide dismutase (the various forms are indicated by their metal cofactor), CAT = catalase, APOD = ascorbate peroxidase, POD = class III peroxidase. Data were taken from Kumar et al., 2011b,c,d,e. | 7,645.4 | 2012-05-02T00:00:00.000 | [
"Biology",
"Environmental Science"
] |
Structural elastoplastic time-history analysis using solid element
Beam elements are often used to simulate frames, and two-dimensional elements are used to simulate shear walls and slabs in structural analysis. This kind of method can be used for most conventional structures, but for the very large member sections of super-tall buildings or for structural members with complex geometry, this simplification may produce significant numerical errors. In this paper, three-dimensional solid elements are used to simulate the special-shaped members in a frame, and the model processing for embedding, multipoint constraints, and other special constraint forms is introduced, which provides a new idea for engineers to use three-dimensional solid elements to simulate building structures. The results of the example show that the simplification of the special members in the structure will not only have a significant impact on the local results of the structure, but may also bias the overall results of the structure.
Introduction
According to the structural design method of "three levels and two stages" in the Chinese codes, to ensure that the structure will not collapse even when it is seriously damaged under the action of a rare earthquake, it is necessary to check the story drift angle under rare earthquake action. This applies especially to high-rise buildings that are more than 150 m tall, buildings in high-intensity seismic regions, or cases where the site conditions are not ideal and isolation bearings and energy dissipation devices are adopted [1].
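As a simple illustration of the drift check described above, the sketch below computes story drift angles from peak story displacements obtained in a time-history analysis and compares the maximum with a user-supplied limit; the numbers and the limit value are placeholders, and the actual allowable value must be taken from the relevant design code for the structural system in question.

# Story drift angle check for elastoplastic time-history results.
# displacements: peak lateral displacement of each floor (m), bottom to top;
# heights: story heights (m); limit: allowable drift angle (placeholder value
# below -- confirm against the design code for the structural system).
def story_drift_angles(displacements, heights):
    drifts = []
    prev = 0.0
    for u, h in zip(displacements, heights):
        drifts.append((u - prev) / h)
        prev = u
    return drifts

def check_drift(displacements, heights, limit):
    drifts = story_drift_angles(displacements, heights)
    worst = max(abs(d) for d in drifts)
    return worst, worst <= limit

# Placeholder numbers for a 4-story frame (not from the example structure).
peaks = [0.035, 0.080, 0.120, 0.150]   # m
heights = [3.6, 3.3, 3.3, 3.3]         # m
print(check_drift(peaks, heights, limit=1/50))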
According to the code for design of concrete structures [2], the finite element analysis model of a structure can be assumed as follows: 1) beams, columns, and braces can be simplified with one-dimensional elements, for which it is appropriate to use fiber-bundle models or plastic-hinge models; 2) walls, slabs and other components can be simplified with two-dimensional elements, for which membrane elements, plate elements or shell elements should be adopted; 3) for complex concrete structures, bulk concrete structures, structural joints or local areas requiring detailed analysis, three-dimensional solid elements should be used.
According to plate and shell theory [3], when the scale of a structure in one direction is much smaller than in the other two directions, a shell element model can be used to simplify the calculation. In practical calculations, a membrane element may be used as required, or the in-plane stiffness may be assumed to be infinite while the out-of-plane stiffness is assumed to be elastic. The one-dimensional element is a further simplification of the computational model.
Due to the characteristics of building structures, a calculation model combining two-dimensional and one-dimensional elements is usually adopted in practical analysis: the shear walls and floors of the structure are modelled with shell elements, and the frames are simulated with beam elements. In recent years, super high-rise buildings have become more and more common and their structural forms increasingly complex. For super high-rise buildings, as the height increases, the sections of both shear walls and columns become larger and larger in order to ensure the bearing capacity of the structure and control the ductility of the concrete; in addition, beam or shell elements cannot accurately simulate some special geometric shapes in the structure. In these cases the three-dimensional stress state of the members is more prominent, and engineers have to use multi-scale finite element methods to obtain accurate results [4,5].
In this paper, ABAQUS is used to analyze a frame structure containing a swimming pool. The use of solid elements in the elastoplastic analysis of building structures is studied, and the results are compared with those of conventional beam and shell element models, so as to provide a new idea for structural engineers who wish to conduct more detailed structural analyses.
Structural model
The frame structure, which includes a swimming pool in the upper floors, is shown in Fig. 1.
Constraints of three-dimensional solid elements
In the original model, beam and column sections are all designed with shape steel: the frame columns contain cross-shaped I-section steel, and the frame beams are embedded with I-section steel. In the three-dimensional solid model, the definition of the shape steel is consistent with the original model, and it is embedded into the concrete. In ABAQUS [6], the EMBED command is adopted to achieve this embedding (Fig. 4). The software automatically binds the shape steel to the related concrete elements and ignores the bond and friction between them.
Constraints for solid elements
In addition, the three-dimensional solid part needs to be connected to the original structure through constraints, as shown in Fig. 5.
Deformation compatibility between the three-dimensional solid column and the frame columns simulated with beam elements is achieved by defining constraints on the upper and lower surfaces of the three-dimensional solid column. The cross-sections of the frame beams that overlap with the three-dimensional solid region in the original structure are reduced, and the ABAQUS EMBED method is adopted for them. The load on the slab is first transferred to the surrounding beam elements; since these beam elements are embedded into the 3D solid elements, the load is then transferred to the beams simulated with 3D solid elements. In ABAQUS, although a coupling constraint between shell elements and 3D solid elements can be defined so that deformation compatibility between the slabs and the 3D solid beams is achieved, it is clearly easier to embed the frame beams directly into the 3D solid elements.
Constitutive relationship of concrete
According to the code for design of concrete structures, the tensile skeleton curve of concrete, the compressive skeleton curve and the unloading path under compression are defined by equations (1)-(5); the skeleton curves under compression and tension are shown in Fig. 6. For the 3D solid elements, the skeleton curves defined by equations (1)-(5) were still used [6], where Kc = 0.667 in the definition of the yield surface in the deviatoric plane, as shown in Fig. 9 (yield surfaces in the deviatoric plane).
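For orientation, since equations (1)-(5) are not reproduced above, the uniaxial damage skeleton curves in the Chinese code for design of concrete structures (GB 50010, Appendix C) take approximately the following form; this is only a sketch of the code's general formulation, and the exact expressions and the parameters alpha_t, alpha_c and the representative strengths f_{t,r}, f_{c,r} used in the paper should be checked against the code itself.
% Tension (x = eps / eps_{t,r}):
\sigma = (1 - d_t)\, E_c\, \varepsilon, \qquad \rho_t = \frac{f_{t,r}}{E_c\, \varepsilon_{t,r}}
d_t = \begin{cases} 1 - \rho_t \left[\, 1.2 - 0.2\, x^{5} \,\right], & x \le 1 \\ 1 - \dfrac{\rho_t}{\alpha_t (x-1)^{1.7} + x}, & x > 1 \end{cases}
% Compression (x = eps / eps_{c,r}):
\sigma = (1 - d_c)\, E_c\, \varepsilon, \qquad \rho_c = \frac{f_{c,r}}{E_c\, \varepsilon_{c,r}}, \qquad n = \frac{E_c\, \varepsilon_{c,r}}{E_c\, \varepsilon_{c,r} - f_{c,r}}
d_c = \begin{cases} 1 - \dfrac{\rho_c\, n}{n - 1 + x^{n}}, & x \le 1 \\ 1 - \dfrac{\rho_c}{\alpha_c (x-1)^{2} + x}, & x > 1 \end{cases}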
Comparison of analysis results
The structural masses of the different models and their main periods are listed in the table, and the displacement results are shown in Fig. 10. The results show that the displacement angle at the top of the structure differs greatly between the two models in both the x and y directions. To compare the influence of the different element types on the local behaviour of the structure, the compressive damage of the concrete in key parts is analyzed, as shown in Fig. 12. The damage results show that the concrete column is damaged by compression at its periphery, while the core area still retains sufficient ductility. The upper part of the beam is obviously damaged by compression, while the bottom of the beam is relatively intact. These damage details are difficult to reflect in the beam element model of Fig. 12(a). On the other hand, in the model with solid elements, the failure of the remaining beam elements is less severe than that of the beams shown in Fig. 12(a).
Summary
In order to improve calculation efficiency, a finite element model combining beam elements and shell elements is usually adopted for building structures, but this simplification may produce large analysis errors for members with large sections and complex geometry. In this paper, 3D solid elements are used to simulate the special members in the structure, and the application of special constraints such as embedding and multi-point constraints in the actual calculation is introduced, which provides a new idea for engineers who wish to use 3D solid elements to analyze building structures. The results of the example analysis also show that simplifying the special members in the structure will not only affect the local results of the structure, but will also bring analysis deviations to the overall results of the structure. | 1,775.2 | 2019-11-08T00:00:00.000 | [
"Engineering"
] |
Gold nanomaterials in the management of lung cancer
Lung cancer (LC) is one of the deadliest cancers worldwide, with very low survival rates, mainly due to poor management, which has barely changed in recent years. Nanomedicines, especially gold nanomaterials, with their unique and size-dependent properties offer a potential solution to many challenges in the field. The versatility afforded by the shape, size, charge and surface chemistry of gold nanostructures allows them to be adapted for many applications in the diagnosis, treatment and imaging of LC. In this review, a survey of the most recent advances in the field is presented with an emphasis on the optical properties of gold nanoscale materials and their use in cancer management. Gold nanoparticle toxicology has also been a focus of interest for many years, but the studies have sometimes arrived at contradictory conclusions. To enable extrapolation and facilitate the development of medicines based on gold nanomaterials, it must be assumed that each design will have its own unique characteristics that require evaluation before translation to the clinic. Advances in the understanding and recognition of the molecular signatures of LC have aided the development of personalised medicines. Tailoring the treatment to each case should, ideally, increase the survival outcomes as well as reduce medical costs. This review seeks to present the potential of gold nanomaterials in LC management and to provide a unified view, which will be of interest to those in the field as well as researchers considering entering this highly important area of research.
Introduction
Lung cancer (LC) is one of the four most prevalent cancers worldwide with more than 2.1 million new cases and 1.8 million deaths annually [1]. It is the leading cause of cancer mortality among both men and women, accounting for 25% of all cancer-related deaths. Survival rates vary widely, depending on how far the cancer has spread at the time of diagnosis, which is a key area for improvement [2]. However, the tumour can take years to grow without associated symptoms, making early diagnosis challenging.
Nanotechnology is emerging as one of the most promising ways to improve LC management, mostly through early tumour detection and novel treatment methods capable of selective targeting and delivery of therapeutics. The extraordinary properties of gold nanomaterials promise the potential to deliver remarkable tools for the diagnosis and treatment of LC. For example, their optical properties aid their detection at low concentrations whilst their size and surface modifications can be manipulated in order to achieve significantly improved therapeutic outcomes. This review seeks to provide an insight into how gold nanostructures have become a powerful tool with which to improve current strategies, as well as providing new platforms to overcome the challenges presented by LC.
LC and gold nanomaterials
Biology, attributes and types of LC
LC can grow anywhere within the lungs and airways. Smoking leads to the greatest risk of developing LC, being responsible for more than 70% of the cases [2], but it is not the only cause. A recent study highlights how exposure of non-smokers to airborne PM2.5 (particulate matter with an aerodynamic diameter of 2.5 μm or less) correlates with the incidence of LC [3].
Symptoms of LC develop as the condition progresses, but they are rarely detected in the early stages. Signs include persistent coughs and breathlessness that become worse over time, recurrent chest infections, haemoptysis (coughing up blood), tiredness and unexplained weight loss [2].
LC is classified into two main types. Together with the stage of tumour development, this determines which treatments will be recommended. Non-small cell lung cancer (NSCLC) is the most common form, accounting for over 85% of cases, and refers to any type of epithelial cell LC. It can be further divided into three main classes: adenocarcinoma, squamous cell carcinoma and large-cell carcinoma. Small-cell lung cancer (SCLC) represents 10% of all LCs. Compared with NSCLC, it has a shorter doubling time, higher growth fraction and is associated with earlier development of metastases [4].
Current diagnosis and treatment of LC
LC is currently diagnosed using an X-ray scan, where the tumour appears as a white-grey mass. To enable a definite diagnosis and staging of the cancer, further tests are required, such as bronchoscopies, biopsies and ultrasound scans. Figure 1 [5] shows an X-ray of a lung tumour. Current treatment options include surgery, radiotherapy, chemotherapy and immunotherapy, either alone or in combination, and this decision will mainly depend on the stage of the cancer and the overall health of the patient.
Despite recent advances, the diagnosis and treatment of LC is still highly invasive, slow and very expensive. Although surgery is the most successful approach, this still leads to very low long-term survival rates [6]. There is a pressing need to improve early detection as well as reduce the variability in response to current treatments. Nanotechnology offers the possibility of greater precision and more personalised options [7]. The targeting observed for the delivery of intravenously applied nanosized materials to tumours has been attributed to the enhanced permeability and retention (EPR) effect. This phenomenon occurs due to the leaky nature of tumour vasculature, allowing nanoparticles to extravasate into tumour tissue to a higher degree than in healthy tissue [8,9]. It should be noted that the significance and impact of the EPR effect in humans have been the subject of debate particularly in terms of in vivo applications. Although many types of nanomedicines exist, gold nanomaterials are starting to demonstrate the benefits of their use, leading to improvements in many techniques, as well as providing a platform for novel therapeutic strategies.
Gold nanomaterials in LC
Under most conditions, gold is chemically inert, with a high relative biocompatibility in the human body. Many of its most exciting properties result from reducing its size from the bulk metal to create nanosized materials, which increases their surface area per unit mass, offering a large chemical surface for functional manipulation. In addition to particles resembling spheres, an extensive catalogue of morphologies is available, allowing the function of the nanomaterial to be adapted for a wide variety of applications (Figure 2). For example, rod-shaped structures are preferred in optical applications as they are efficient at absorbing energy from a light source (typically near-infrared light) [10]. For the purposes of this review, the term gold nanomaterial will be used to refer to materials with at least one dimension below 100 nm, irrespective of the specific morphology (nanosphere, nanorod, nanostar, etc.).
Due to the size of the nanomaterial, the electrons at the gold surface have the ability to interact with light, resulting in the surface plasmon resonance (SPR) phenomenon. In short, SPR refers to the collective oscillation of conduction electrons in a metal as a result of their excitation by incident light. Influenced mostly by the size and shape of the structure, the SPR is able to modulate the effect of an electromagnetic wave in very localised and focused positions around the material, enabling its utilisation for medical purposes [11] (Figure 3A). With that objective in mind, it is important to tune the SPR absorption wavelength so that it falls within the near-infrared (NIR) region of the electromagnetic spectrum (650-1300 nm). This is known as the 'biological window' (or optical/therapeutic window), as light can penetrate more deeply into the tissue due to the absence of absorptions by other biological species in that range [12]. Specific applications related to these optoelectronic characteristics will be discussed in later sections.
The interaction of nanoscale gold materials with biological molecules is mainly determined by surface charge and coating. To prevent aggregation and increase the circulation time in the vasculature, a degree of controllable hydrophobicity is desirable to repel plasma proteins and this is usually achieved through the use of polymer coatings. Polyethylene glycol (PEG) is the most widely used surface coating for gold nanomaterials and a standard strategy to reduce protein corona formation [13] (unwanted adsorption of proteins to the surface of the gold) and to enhance the EPR effect within the tumour [14].
Depending on the desired physicochemical characteristics, different synthetic methods are used to generate the nanostructures, spanning physical, chemical and biological methods. Most approaches generate nanosized gold structures in situ with thiol head groups at the metal surface. This enables the amino groups of biomolecules to bind to the terminal carboxylate tail groups extending away from the surface, employing peptide coupling chemistry. Other useful techniques, such as the alkyne-azide cycloaddition reaction ('click' chemistry) can also be used, allowing different, orthogonal approaches to be employed for surface functionalisation. This versatility allows the addition of almost any functional group (sugars, antibodies, peptides, proteins, DNA strands, etc.) to the surface of gold nanostructures. Importantly, this permits multiple units to be combined in a controlled manner with the only limitation being available space ( Figure 3B). This modular approach to design, together with other favourable properties (compared with other metals), such as the lack of oxide layer and extended intravascular circulation times, make gold nanomaterials ideal candidates for numerous applications in LC management [15][16][17].
Toxicology and organ distribution
The same unique properties that make gold nanomaterials so exciting for medical applications can also result in undesirable effects on healthy tissues. It is important to establish whether the nanomaterials are toxic at the concentrations at which they exhibit therapeutic effects, usually in the range of 1-100 particles per cell, as well as how many particles actually enter each cell [18]. Nanosized gold can be redox-active, with the inherent potential to induce the intracellular formation of reactive oxygen species (ROS) that can correlate with toxicity [19,20]. Reported examples of pulmonary cytotoxicity for gold nanomaterials include an increase in cellular invasion [21], epigenetic modifications [22], organelle reorganisation [23] and changes in protein expression [24,25]. Results in this area are sometimes contradictory and dependent on the characteristics of the materials under investigation. However, there is a degree of consensus regarding the role of positive surface charge in the induction of ROS and pro-inflammatory mediators [26]. A study by Elbakary et al. [27] indicated the occurrence of lung remodelling after intratracheal instillation of gold nanoparticles in healthy adult male rats, exhibiting congestion of blood vessels, alveolar collapse, extravasation of red blood cells and thickening in the alveolar wall, suggesting fibrosis. However, the rats were exposed to 40 or 400 mg/kg of the nanomaterial every day for 14 days, doses which might be expected to induce pulmonary inflammation independent of any innate particle toxicity. It is also worth noting that some studies claiming toxicity for gold nanoparticles make use of surface chemistries which are known to be toxic [14], or use unfunctionalized citrate-coated particles [18]. This serves to illustrate that the first interactions between gold nanomaterials and the cells of an organism are through their surface units and so these species will strongly influence the overall toxicity of the material.
The ultimate distribution of gold nanomaterials in the body depends mainly on the administration route employed. Conventional intravenous injection is unlikely to result in delivery to the lung. For example, Koyama et al. [28] found that most gold nanoparticles are located in the liver after tail vein injection, and another intravenous study showed no accumulation of particles in the lung, regardless of particle shape [29]. Similarly, there was little evidence of gold nanoparticles in the lung following oral administration, although there was increased non-lung related toxicity, likely due to the removal of functionalised molecules from the gold surface through the action of gut enzymes [30][31][32]. Aerosol inhalation or intratracheal inhalation of gold nanoparticles has proved to be the most efficient and safe method for experimental lung delivery applications [33,34]. Although there are differences between these exposure models in terms of the pattern of nanomaterial deposition, safe, relatively non-toxic levels of exposure to the lung can be achieved [34,35]. Thus, most studies of inhaled gold nanoparticles report no harm or signs of inflammation [14] and result in good maintenance of lung integrity and non-deleterious interactions with its biological secretions and fluids [36,37]. Exposure of mice to intratracheal gold nanoparticles of 20 nm diameter resulted in a predominant deposition in the caudal, lower lobes [38]. Approximately 20% of the particles deposited on the large airway epithelium were rapidly cleared by the mucociliary escalator. The remaining 80% were found in the alveoli and relocated from the epithelium to the interstitium of the lung within 24 h, although most of these particles re-enter the airspaces within macrophages and undergo mucociliary clearance. Only a very small proportion (less than 1%) of the material was translocated to other organs, primarily across alveolar type I epithelial cells [39]. Translocation of a very low percentage of metallic, low-solubility nanoparticles across the air-blood-barrier is seen in most inhalation studies [40,41]. The efficiency of the process is to some extent dose dependent [40], but it is also influenced by the surface charge (translocation being higher for negatively charged particles) [41] and by the integrity of the lung (e.g. healthy vs unhealthy) [42]. Surface coatings such as PEG facilitate diffusion through mucus and surfactants in the airways [43]. Although the excretion route of the gold nanomaterials will also depend on their size and shape, there is some agreement that the use of particles between 10 and 100 nm slows down the activation of the mononuclear phagocyte system, while still being large enough to avoid immediate renal filtration [44][45][46]. It should be noted that in contrast with the study of Elbakary et al. [27], these investigations indicate very little or no toxicity, reflecting the more realistic doses employed.
Diagnosis using gold nanomaterials
High mortality rates in LC are often a consequence of late diagnosis and there is now a major effort to improve early stage tumour detection, which would lead to better prognosis and survival rates. It is key to identify biomarkers which are overexpressed in the early stages of LC. Gold nanostructures can act as sensors in labelling applications due to their ability to interact with visible light and effectively enhance the signal for diagnostic purposes [47].
Immunosensors and serum tumour markers
Serum tumour markers represent an alternative to invasive methods, not only to improve early diagnosis, but also to monitor the progress of therapy in patients with advanced LC. Immunosensing strategies detect specific antigens overexpressed in tumoural cells and are one of the most commonly used diagnostic techniques. However, they often suffer from the need for time-consuming processing and require sophisticated manipulation. Figure 4 shows a representation of a biomarker immunosensing strategy based on gold nanoparticles: (1) gold nanoparticles functionalized with a specific lung cancer antibody are attached to the microchip channel surface; (2) the gold nanoparticles are exposed to serum from the blood of a patient with lung cancer containing the specific biomarker. Gao et al. [48] reported a straightforward colorimetric assay for the multiplex detection of four LC-associated proteins. This is based on a multilayered approach composed of GNPs functionalised with capture antibodies to generate readable optical signals in a microarray. The immunosensor was able to detect protein concentrations down to 1 ng/ml in serum samples in less than an hour and to combine the detection of four biomarkers. This approach improved the sensitivity of LC diagnosis and staging by up to 88% compared with conventional techniques. Electrochemical biosensors offer an approach that is simple to prepare and use, with fast, label-free detection and inexpensive equipment and materials. These systems usually consist of a sensing substrate with good conductivity over a large surface area, and the gold nanomaterials amplify the signal on the electrode surfaces by improving electron transfer. Wang et al. [49] fabricated a conductive hydrogel with electrodeposited GNPs for the sensitive, label-free detection of neuron-specific enolase (NSE), a substance that has been detected in patients with certain tumours. The sensor exhibited a wide linear detection range of 0.001 to 200 ng/ml and a limit of detection of 0.26 pg/ml for NSE, which has a medical cut-off value of 5 ng/ml. Performance comparisons with clinical methods using serum samples reported 93% concordance, indicating substantial clinical promise for such sensors.
Genosensors
Overexpressed tumour-associated genes represent another set of interesting biomarkers. High-throughput techniques in the field of genomics have recently identified numerous promising microRNAs that play an important role in the development of LC [50]. For example, high miRNA-21 levels are believed to be indicative of lung carcinogenesis status [51]. However, detection of nucleic acids is challenging due to their low abundance and short sequence, making the development of signal amplification approaches critical for their sensing. Specially shaped gold nanomaterials that maximise the surface area are particularly attractive as genosensors. As an example, Su and co-workers [52] developed hierarchical, flower-like gold nanostructures assembled with DNA probes for subsequent hybridisation detection of miRNA-21 resulting in sensitivity as low as 1 fM. Gold nanostructures have also been used to improve the detection of circulating tumour DNA [53], long-noncoding RNAs [54] and other types of microRNAs [55][56][57][58] associated with LC.
Novel sensing approaches
The detection of exhaled volatile organic compounds (VOCs) is considered preferable to biopsies due to the much less invasive nature of the technique. Changes in several VOCs found in exhaled breath have been correlated with the early pathological process of LC, specifically certain aldehydes [59]. Peng and co-workers [60] were the first to report an array of sensors based on gold nanomaterials that could rapidly distinguish between LC patients and healthy individuals based on their breath. More recently, Qiao et al. [61] improved the absorption of gaseous molecules onto a self-assembled layer of GNPs for biomolecular detection of aldehydes in exhaled breath samples, achieving a detection limit of 10 ppb, surpassing by far the sensitivity required for the clinic.
As their role in cancer has emerged in recent years, the detection of exosomes released from LC cells has also been explored using gold nanomaterials. Exosomes are small vesicles (30-150 nm) which are secreted by most types of cells. They have essential roles in tumourigenesis and possess specific membrane proteins, as well as containing proteins, nucleic acids and lipids that regulate malignant biological activity in their mother cells [62]. Methods for sensitive and specific identification of tumour-derived exosomes are being explored for NSCLC, not only for cancer detection but also for metabolic staging and predicting treatment outcomes. In recent work, Fan and co-workers [63] utilised a biosensor to determine populations of NSCLC-derived exosomes using antibody-functionalized GNPs. The strategy was able to distinguish successfully between different LC subtypes and was used to evaluate the therapeutic efficacy by measuring the concentration of exosomes in representative human plasma samples.
Imaging
High precision imaging is crucial for the early diagnosis and accurate monitoring of LC. The development of new imaging techniques and improvement of current imaging agents is key to targeting cancer in specific locations in the body. Nanosized gold materials offer several advantages over conventional organic dyes and contrast agents due to their low toxicity, spectroscopic properties and negligible quenching [19]. Their ability to accumulate at the site of interest through the EPR effect (see above) is a significant benefit. Their inherent optical properties obviate the need for an applied electromagnetic field, which is important as applied fields can result in the gold nanomaterials actually causing toxicity and tissue damage [64]. The versatility of gold nanomaterials facilitates the combination of multiple imaging modalities to offer complementary information for the accurate monitoring of LC.
Computed tomography and X-ray scans
X-ray and computed tomography (CT) scans are the initial tools used in the diagnosis and staging of LC. A major drawback is the difficulty in distinguishing between tumoural tissue and other lung tissues due to the lack of preferential accumulation of contrast materials in the cancerous cells. The optical properties of gold nanoparticles can enhance the precision of CT images up to 2.5 times due to photoelectric absorption, X-ray attenuation and prolonged circulation time in the lung [65]. The improved contrast also permits the use of a lower radiation dosage to obtain the images, reducing possible side effects. Figure 5 [66] shows an example of this effect. Different doses of gold nanoparticles were used [67] to enhance CT contrast by increasing blood half-life and tumour accumulation in mice-xenografted LC models. An extensive review of the use of GNPs in CT imaging has been published by Ashton and co-workers [68].
Fluorescence microscopy
The implementation of fluorescence imaging techniques for LC as a medical tool is limited and not widely used. This is mainly due to the toxicity associated with the photobleaching of organic probes and the shallow penetration of tissue associated with this technique. Gold nanoparticles possess extinction coefficients up to 10^11 M^-1 cm^-1, which is several orders of magnitude higher than those found in organic dyes [18]. The overlap between emission spectra of a given fluorophore and the surface plasmon band of the GNP is known as the fluorescence resonance energy transfer (FRET) phenomenon and can be exploited to enhance the imaging capabilities of this modality [69]. Gold nanoclusters capped with glutathione molecules proved to be non-toxic, exhibiting long intracellular lifetimes (100 ns), and preferentially accumulated in cancerous lung cells when imaged by confocal microscopy [70]. Furthermore, gold nanostructures could be an alternative not only in the clinic but also in pre-clinical laboratory studies. Their use in a pre-clinical setting could overcome some of the limitations of organic dyes, such as poor hydrophilicity and photostability, low quantum yield or insufficient stability in biological systems.
Magnetic resonance imaging
Magnetic resonance imaging (MRI) is an effective technique used in cancer diagnosis due to its resolution of abnormalities in soft tissues. However, the information it provides is limited in the absence of contrast agents and it suffers from image distortion when imaging the air-blood-barrier due to the presence of large susceptibility changes [71]. Gadolinium chelates bound to nanomaterials improve their performance as contrast agents, through slowing the rotational motion of the chelates and enabling better interaction with endogenous water molecules [11]. For example, dendrimeric Gd-coated gold nanoparticles [72] deliver excellent soft-tissue resolution and increased relaxivity and retention times in HER-2 positive LC in mice. This performance is far superior to that using non-immobilised gadolinium chelates.
Surface-enhanced Raman scattering
Surface-enhanced Raman scattering (SERS) is a relatively new, ultrasensitive, and non-invasive spectroscopy technique that uses differences in molecular vibration states to distinguish tumoural cells from the surrounding tissue. Enhancements of the SERS signal up to 10^15 times have been reported, lowering the detection limit of the technique to the single-molecule level [73]. Gold nanostars (GNS) coated with Nile blue A (a conventional Raman reporter) were investigated [74] in lung adenocarcinoma and alveolar epithelial type-II cells. The SERS spectra could clearly identify important cellular components such as proteins, nucleic acids, lipids and carbohydrates, resulting in a characteristic Raman mapping of the tumoural cells. This demonstrated that nanostructures with attached Raman reporter species are able to highlight cellular signatures and provide high spectral specificities on the cellular environment of living cells.
Treatment of LC using gold nanomaterials
Surgical removal is limited to large, accessible tumours, while chemotherapy suffers from severe side effects and the development of drug resistance. Radiotherapy is an alternative but is highly damaging to healthy tissue along the radiation path. Therefore, novel approaches aim to avoid these drawbacks by using a delivery agent that is cleared from the body once its purpose has been achieved, thereby reducing exposure and limiting toxicity. A graphical overview of the main therapeutic strategies for LC involving gold nanomaterials is shown in Figure 6.
Improving the delivery of anticancer drugs
Almost all chemotherapeutics are low-density compounds that diffuse rapidly into the tissue and are widely distributed in the body with a short circulatory half-life and undergo rapid clearance [75]. One of the major challenges faced in the field of chemotherapy is the development of resistance by the tumoural cells associated with the low-specificity of the drugs used. Resistance mechanisms include decreased drug uptake, high drug efflux, activation of detoxifying systems and even DNA repair mechanisms [76]. Nanomaterials can be used to target the cancer site, optimise biodistribution of drugs through their ability to translocate across biological barriers, and encapsulate otherwise unstable chemical compounds. Drug-release performance from nanomaterials depends on the strength of the drug attachment or the encapsulation mechanism.
Controlled release also permits a reduction in the quantity of cargo, while still achieving the same clinical response, but with reduced toxicity and improved therapeutic index. For example, activation of mutations in the tyrosine kinase domain of the EGFR gene is the most common genetic abnormality in NSCLC. However, tyrosine kinase inhibitors such as Afatinib suffer from poor tumour accumulation and systemic side effects that limit their clinical use. Cryer and co-workers [77] designed an Afatinib-GNP system to improve drug biocompatibility, successfully achieving a 3.7-fold potency enhancement when administered to LC cells, while maintaining cell viability in a model of a healthy lung epithelium. Another strategy used to limit severe toxic effects and promote a short half-life in the blood is the encapsulation of drugs in pH-dependent nanostructures. A multifunctional gold nanocluster was used by Guo et al. [78] to encapsulate and deliver controllably the antimetabolite drug Methotrexate into a xenograft mouse tumour model. Significant tumour suppression was observed without overt toxicity after 10 days of treatment. Almost all chemotherapy drugs have been successfully conjugated to gold nanostructures, including Methotrexate [79,80], Temozolomide [81], Cisplatin [82], Bortezomib [83], Docetaxel [84], Gemcitabine [85,86] and Doxorubicin [87,88]. The clinical utility of gold nanomaterials is not limited to the delivery of chemotherapeutics but can also be used to increase sensitisation and vulnerability of the tumour before treatment with the drug. For example, CYT-6091, a gold nanomaterial formulation in phase-1 clinical trials, is being used to induce hyper-permeability in the lung and enhance the efficacy of subsequent chemotherapy [89].
Gene silencing therapy
Antisense DNA and RNA interference have recently emerged as powerful tools to down-regulate gene expression in cancer cells. Small interfering RNAs (siRNAs) can be successfully transfected into cells with controllable strength and duration of the silencing response. These techniques, which work well in the lab, suffer from significant obstacles when applied in the clinic [90]. For example, siRNAs have a short half-life and poor chemical stability, easily dissociate from the vector in a biological environment and, alone, offer weak or no protection against RNases in biofluids. The attachment of nucleic acids to GNPs offers protection against hazardous enzymes and enhances their circulation time in the body as well as permitting tumour targeting when appropriate coatings are used [91]. Many different strategies have used GNPs to knock down transcription factors associated with LC, such as TLR-4/-9 [92], TRAIL [93,94] or IAP-2 [95], amongst others [96,97]. One of the most studied is c-Myc, a key regulator of cell proliferation and apoptosis which, when delivered in association with GNPs, has successfully prolonged survival of lung-tumour bearing mice [98][99][100]. Recently, Kim et al. [101] used a polymeric multilayer structure with a gold particle core to effectively deliver Myc-siRNA in a bioreductive environment. The formulation was intravenously injected into a murine lung carcinoma xenograft model and found to significantly suppress tumour growth by 83% compared with the group undergoing no treatment and the group treated with siRNA alone.
Immunotherapies
The clinical use of biological therapeutics in LC is only occasional and only for patients where the molecular mechanisms/mutations have been established. Other targeted therapies, such as immunotherapy are gaining acceptance for their effective recognition and killing of tumour cells. Immunotherapy involves the activation of the patient's immune system and offers advantages in the inhibition of metastasis and cancer recurrence. Recent research has revealed that gold nanomaterials can be used to activate immune cells as initiators of the immune response. One of the most promising immunotherapy approaches [102] is to promote the maturation of dendritic cells in the lymph node and induce the response of antigen-specific lymphocytes for local LC therapy. In this work, liposome-coated gold nanocages were subcutaneously injected into tumour-bearing mice and a maximal accumulation of the particles was observed in the regional lymph nodes 12 hours post-injection followed by a significant increase in dendritic cell maturation. Although the tumour could not be cleared through immunotherapy alone, its occurrence was significantly delayed, and the strategy is recognised as having potential application in combined therapy.
Plasmonic therapies
As mentioned briefly above, this section will focus on the use of the plasmonic properties of gold nanomaterials to induce cell damage and kill tumours in LC through the SPR effect [103].
Photothermal therapy
Photothermal therapy (PTT) is the most well-known light-driven therapeutic approach and has many promising applications in cancer therapy. It relies on localised induction of heat close to the tumour tissue. Hyperthermia, defined as tissue temperatures above 43°C, produces irreversible cell damage due to the denaturation of proteins and disruption of cell membranes leading to cellular ablation [104]. After SPR excitation with a specific wavelength of light, a heated electron gas is formed around the gold nanostructure which then cools rapidly (1 ps), exchanging energy with the proximal crystalline network. This leads to energy transfer to the local environment by three processes: electron scattering, electron-phonon coupling and phonon-phonon interactions [105]. Selectivity is achieved by a combination of several factors: focused directional positioning of the incident radiation, efficient targeting of the nanoparticles and the reduced heat tolerance of cancer cells [104]. Figure 7 [12] shows an image of mice before and after photothermal treatment with GNS.
The El-Sayed group has pioneered modern PTT with gold nanomaterials and one of the most relevant studies from this group demonstrated the feasibility of PTT in squamous cell carcinoma in mice [106]. Using gold nanorods (GNR), the preferred shape for PTT due to their enhanced absorption cross-section and two plasmon bands, it was reported that irradiation led to a dramatic decrease in size (P<0.0001) of deep malignant lung tumours [106].
(Figure 7 caption: The images were taken 3 days after injection with the gold nanomaterial; the control group was injected with saline solution. Remission of the tumour was observed in the mice treated with gold nanostars, while the control mice displayed continued rapid tumour growth. Adapted from reference [12] and reproduced with permission.)
Even with mild temperature increases, an acceleration in reoxygenation and perfusion of tumour tissues has been reported to increase chemo- and radiotherapy efficacy, making PTT very attractive as a combination therapy [107]. The promise offered by raised temperatures in conjunction with chemotherapeutics is indicated by a phase II clinical trial, which investigated the effects of two well-known anticancer drugs under mild hyperthermia conditions (NCT00178763). The trial showed an enhanced effect of the therapy after sensitisation with poly(lactic-co-glycolic acid) (PLGA)-coated Fe3O4 nanoparticles [108]. PTT is the main focus of therapeutic approaches based on gold nanomaterials entering clinical trials for LC. Gold-silica nanoshells (AuroShells®) have entered an open-label, single-centre, single-dose efficacy pilot study for the treatment of primary and metastatic tumours of the lung (NCT01679470). In this study, participants were given an intravenous infusion of the PEG-coated nanomedicine followed by laser irradiation of the tumours via fibre optic bronchoscopy. Primary outcomes of the trial reported a decrease in the tumour volume and the finding that participants did not manifest thermal lesions over a 6-month period.
Photodynamic therapy
Photodynamic therapy (PDT) uses photoactive molecules (photosensitizers) to induce localised formation of cytotoxic ROS as therapeutic agents. This approach does not use the SPR of gold nanomaterials directly. Instead, their interactions with light enhance and mediate the formation of ROS by amplifying and transferring photonic energy to neighbouring photosensitizer molecules. Some unusual shapes of gold nanostructures have been explored in PDT to enhance the localised electric field around the nanomaterial in the lung. For example, Wang et al. [88] used gold bipyramids to enhance the excitation of a commercially available photosensitizer (AlPcS) 16-fold due to the proximal energy transfer from the surface plasmons of the nanoparticle. Further insights and applications of PDT in LC have been reviewed by Allison et al. [109]. A major drawback of PDT is that it relies on the presence of oxygen to generate ROS but the oxygen concentration at solid tumours is usually limited (hypoxic conditions). Consequently, PDT has been overtaken by PTT in the development of cancer treatment.
Theranostics
The combination of therapy and diagnostics has led to the new field of theranostics, which seeks to bring together the simultaneous diagnosis, treatment and real-time imaging in 'all in one' biocompatible nanomaterials. These multifunctional systems are designed for personalised and specific LC management by merging chemical and physical properties into a single material. For example, Liu et al. [87] developed multioperative GNR encapsulating Doxorubicin and gadolinium surface units for combined chemotherapy/PTT/MRI applied to EGFR-positive tumours. The assembly displayed enhanced contrast images and promoted the destruction of tumoural cells after laser activation in vivo. No signs of morphological damage to healthy tissues were reported, indicating reduced side effects of the Doxorubicin drug. Another less common and highly dangerous LC mutation was efficiently treated through the application of a theranostic approach. Anaplastic lymphoma kinase (ALK) tumours represent around 8% of NSCLC. The routinely used ALK-targeted drugs have been shown to induce mutations that lead to drug resistance after 8 months of chemotherapy. Li et al. [110] devised a dual-target siRNA/PTT/chemotherapeutic assembly based on a gold nanoshell system that significantly improved drug delivery to the tumour in vivo. Upon laser irradiation, significant ALK-gene inhibition occurred only at the tumour site. The plasmonic properties of gold nanomaterials have been exploited by combining PDT/PTT with photoacoustic/NIR fluorescence imaging in a single GNS system [111]. The functionalised nanostars were able to target the cancer specifically in lung xenograft tumour-bearing mice. Tumour growth was significantly suppressed (by 93% compared with controls) with negligible damage to secondary organs and no variation in the body weight of the animals. In addition to these effects, the fluorescence intensity of the images was enhanced more than 3-fold. These examples and many more currently being developed hold significant clinical potential for nanogold applications to drive LC treatment with the ultimate aim of personalising detection and treatment for each patient.
Conclusions
LC has one of the highest mortality rates of all cancers with conventional methods requiring invasive procedures and non-invasive alternatives, such as chemotherapy and radiotherapy, which suffer from substantial side effects. There is, therefore, an urgent need for new diagnostic, imaging and treatment tools. Gold nanostructures have emerged as a powerful vehicle that can be used to overcome solubility and stability issues of drugs, reduce mis-targeting of the tumour and overcome biological barriers that previously made treatment difficult. The shape, size and surface chemistry of gold nanomaterials are the key factors that allow these materials to be adapted to enhance the aforementioned applications in medicine. The manipulation of their structure and their surface functionality permits optimised cellular targeting, internalisation and many favourable biomolecular interactions. Nevertheless, a clear understanding of the distribution of the particles inside the body, their interplay with healthy tissue and, above all, their clearance mechanisms, will determine whether they are safe for use in humans. The literature on the toxicology of gold nanomaterials is substantial and covers many diverse aspects. It is clear that every single nanogold design has its own unique properties (e.g. size, shape, charge, surface chemistry, etc.) and so there is a pressing need to unify concepts to enable extrapolation to the clinic and facilitate the development of diagnosis and therapy based on these promising materials. The latest studies in the field identify inhalation as the best way to deliver gold nanomaterials to the lung, indicating high deposition efficiency, good distribution, maintenance of the tissue integrity and no signs of induced inflammation. In recent years, high precision techniques in gene sequencing and -omics technologies have revolutionized our knowledge of LC, helping to understand its molecular signatures. This increase in information on the mechanisms involved in LC has led to better identification of biomarkers that can be translated into more precise and faster diagnostic techniques. Biosensor devices based on gold nanomaterials are promising for research purposes and ultimately for use as non-invasive diagnostic tools for early detection, which is a key factor in the battle against LC. In terms of treatment, gold nanostructures with a high surface area offer greater targeted drug payload, permitting a reduction in the drug intake, which in turn translates into less systemic side effects. As discussed above, gold nanomaterials are also promising therapeutics in their own right, for example, when stimulated by an appropriate light source. Photothermal therapies have demonstrated their effectiveness and safety and are leading the way in the translation of nanogold-based treatment to the clinic. Another promising area is the coupling of photo-responsive agents to the surface of the gold nanostructures to enhance the performance of almost any light-based imaging technique. This is particularly useful in the field of theranostics, where the diagnosis, treatment and imaging of LC could be achieved simultaneously using a single construct.
While this is very encouraging, any consideration of this field must be realistic and accept that most studies are still in their infancy. The majority of the approaches presented in this review, although exciting, are in the initial phases of development and have not yet found their way into the clinic. Unlike other FDA-approved gold nanomedicines, such as Aurolase® PTT therapy for prostate cancer [112], the sensitivity of the lung makes the delivery of therapeutics particularly challenging. As highlighted above, there are many issues that have to be overcome before clinical translation can be achieved. Nevertheless, these early innovative studies indicate a broad range of applications for gold nanomaterials as anticancer weapons in the future, including for LC, either alone or in combination with other techniques. Ultimately, their success will depend on continued investment and research into their development. These aims will benefit from greater consensus and cohesion within the field, drawing together advances in big data, nanomedicine, optics, analytics and supramolecular chemistry.
Summary
• Gold nanomaterials offer a wide variety of attributes that allow them to be adapted to permit or enhance diagnosis, staging and treatment of LC.
• Greater integration of the concepts would enhance translation to the clinic and facilitate the development of therapies based on gold nanomaterials.
• Emerging fields such as theranostics hold great promise for the development of new goldbased nanomedicines.
• Although the approaches presented here are still in their infancy, they clearly point towards the use of gold nanomaterials as anticancer weapons of the future. | 8,911 | 2020-12-03T00:00:00.000 | [
"Medicine",
"Materials Science"
] |
Android Application to Assess Smartphone Accelerometers and Bluetooth for Real-time Control
—Modern smart phones have evolved into sophisticated embedded systems, incorporating hardware and software features that make the devices potentially useful for real-time control operations. An object-oriented Android application was developed to quantify the performance of the smartphone's on-board linear accelerometers and bluetooth wireless module with a view to potentially transmitting accelerometer data wirelessly between bluetooth-enabled devices. A portable bluetooth library was developed which runs the bluetooth functionality of the application as an independent background service. The performance of bluetooth was tested by pinging data between 2 smartphones, measuring round-trip-time and round-trip-time variation (jitter) against variations in data size, transmission distance and sources of interference. The accelerometers were tested for sampling frequency and sampling frequency jitter.
I. INTRODUCTION
Smartphones evolved from the PDAs of the late 1990s. PDAs were handheld computers essentially used for organising information. They were equipped with small keyboards that the user could utilise to input information. IBM Simon was the first PDA with mobile phone functionality and can be considered the first smartphone. In 2007 Apple Inc. introduced the iPhone, which incorporated a large multi-touch screen for direct finger touch input as its main method of interaction. Mass-produced ARM-based microprocessor technology delivers high speed multi-processing on an inexpensive, battery powered platform that, only a decade ago, industrial computers would have been envious of. Smartphones have a myriad of onboard sensors, such as motion sensors including accelerometers and gyroscopes, environmental sensors that can measure pressure, light, temperature and humidity, as well as position sensors such as orientation sensors, magnetometers and GPS locators. The original purpose of the smartphone was communication, and smartphones have expanded their capabilities here also, including bluetooth and infrared.
Smartphones comprise 2 operating systems: a low-level operating system that handles the drivers for the hardware and a higher level user-interfacing operating system. The most common operating system installed worldwide is Google Inc.'s Android operating system, which is built on top of a Linux kernel.
Android was first established in 2003 with the aim of developing a more user-oriented operating system than those of Symbian and Microsoft. The main advantages of Android over its rivals are its flexibility and upgradability. Android has grown in popularity among consumers and developers alike, to the point where an industry survey [1] in 2013 showed that 71% of all mobile development is for the Android operating system.
Since its inception there have been many evolutions of the Android operating system dashboard, from the original "Froyo" through to "KitKat", shipped with new smartphones. Table 1 displays the various distributions of Android dashboards. Developing an application that is compatible with API 10 and higher will guarantee coverage of 99.3% of the Android market, but older APIs are not compatible with newer Android features. Some features are only available with more recent APIs due to continuous developments by Google. Developers need to be aware of the features available with each version and the size of the market associated with that version. The bluetooth application on which this paper is based is compatible with all dashboards from Froyo to KitKat. The Android accelerometers are primarily intended for screen orientation and game play. Android [2] describes the accelerometer sampling periods in terms of data delays in sending sensor readings to the application, ranging from 200,000 microseconds for the "Normal" sampling rate to a 0 microsecond delay for the "Fastest" sampling rate. The delay is only a suggested delay, and the Android system and other applications can change it.
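For illustration, the following is a minimal Java sketch of how an Android application can register for accelerometer updates at the "Fastest" rate and time-stamp each sample so that the achieved sampling frequency and its jitter can be estimated; the class and variable names are illustrative and are not taken from the application described in this paper.
import android.app.Activity;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.os.Bundle;

public class AccelerometerSampler extends Activity implements SensorEventListener {
    private SensorManager sensorManager;
    private Sensor accelerometer;
    private long lastTimestampNs = 0;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        sensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
        accelerometer = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
    }

    @Override
    protected void onResume() {
        super.onResume();
        // SENSOR_DELAY_FASTEST requests a 0 microsecond delay; this is only a hint and the
        // Android system or other applications may deliver samples at a different rate.
        sensorManager.registerListener(this, accelerometer, SensorManager.SENSOR_DELAY_FASTEST);
    }

    @Override
    protected void onPause() {
        super.onPause();
        sensorManager.unregisterListener(this);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // event.values holds the acceleration in m/s^2 along the x, y and z axes.
        // Differences between successive event.timestamp values (nanoseconds) give the achieved
        // sampling period, from which the sampling frequency and its jitter can be computed.
        if (lastTimestampNs != 0) {
            long periodNs = event.timestamp - lastTimestampNs;
            // accumulate periodNs here to estimate the mean sampling frequency and its jitter
        }
        lastTimestampNs = event.timestamp;
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {
        // Not needed for this sketch.
    }
}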
Bluetooth is a wireless communication protocol invented by Ericsson in 1994 as a wireless replacement for serial port communications between mobile phones and headsets [3,4]. Management of the specification passed to the Bluetooth Special Interest Group (BSIG) in 1998, and Bluetooth 1.0 was released in 1999 with a data rate of 721 kbit/s. Bluetooth 2.0 Enhanced Data Rate (EDR) was adopted in 2004, providing a data rate of 1 Mbit/s without EDR and 3 Mbit/s with EDR, and coinciding with the landmark of 3 million product shipments per week. In 2009 Bluetooth 3.0 High Speed (HS) was adopted with a data rate of up to 24 Mbit/s, and Bluetooth 4.0 Low Energy was adopted in 2012, when annual product shipments exceeded 2 billion. Current specification development is in the area of IP connectivity, preparing bluetooth for the Internet of Things revolution.
Bluetooth is a low energy, short-range, short wavelength radio transmission protocol operating within the unlicensed ISM radio frequency band from 2.4-2.485 GHz. A bluetooth radio can have a range from 1 metre up to 100 metres, depending on the class of device, with smartphones typically ranging up to 10 metres. Once connected, a small network called a piconet is dynamically created, which allows a master device to connect with up to 7 slave devices [5]. Each device can be connected to multiple piconets simultaneously, allowing for complex, wide-ranging connectivity. One of the main advantages of bluetooth networks is their ease of set up. Two devices can connect with the push of a button with little configuration required from the user.
Research [6,7] has been carried out into the performance of bluetooth over varying distances, data sizes and sources of interference, but the data sizes tested were large (11 kB - 5000 kB) in comparison to the 4 bytes typical of sensor readings. There is an exponential correlation between data size and transmission times, with data size having a negligible effect for smaller data sizes and a much greater effect at large data sizes. There is also a direct correlation between distance and transmission times for large data sizes, with a negligible effect of distance for smaller data. This paper specifically assesses the effect of distance and data size when sending small data packets consistent with the transmission of sensor data in real-time control applications.
Other conclusions from [6] are that concrete walls and metal barriers reduce the effective range of bluetooth to 3 metres, and that transmitting large data sizes and direct sunlight reduce the effective range to 4-7 metres. Interestingly, wi-fi had no effect on transmission times for data sizes less than 100 kB. Delay variation was measured in [7] and found to be greater than 16%, which is still less than the industry recommended maximum of 20%. No difference was found between transmissions involving mobile phones, PCs or other computers.
Some testing [8] was carried out into the streaming of MIDI music files via bluetooth with message lengths of between 1 and 6 bytes, with particular emphasis on comparing delays when the master and slave devices transmit. The result is that the master transmits with a mean delay of 30 ms and a standard deviation of 10 ms, compared with the slave transmitting with a mean delay of 20 ms and a standard deviation of 20 ms. The discrepancy occurs because of the different permissions of the master and slave devices regarding when they can transmit.
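To make the round-trip-time measurement concrete, the following is a minimal Java sketch using the standard Android bluetooth API, assuming that the remote device simply echoes back every packet it receives; the UUID shown is the conventional Serial Port Profile UUID, the echoing behaviour is an assumption of this sketch rather than a description of the application developed in this paper, and the BLUETOOTH permission must be declared in the manifest.
import android.bluetooth.BluetoothAdapter;
import android.bluetooth.BluetoothDevice;
import android.bluetooth.BluetoothSocket;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.UUID;

public class BluetoothPing {
    // Conventional Serial Port Profile UUID used for RFCOMM connections.
    private static final UUID SPP_UUID = UUID.fromString("00001101-0000-1000-8000-00805F9B34FB");

    /** Sends 'count' packets of 'payloadSize' bytes to a remote device that is assumed to
     *  echo each packet back, and returns the measured round-trip times in nanoseconds. */
    public static long[] pingRoundTrips(String remoteMac, int payloadSize, int count) throws IOException {
        BluetoothAdapter adapter = BluetoothAdapter.getDefaultAdapter();
        BluetoothDevice device = adapter.getRemoteDevice(remoteMac);
        BluetoothSocket socket = device.createRfcommSocketToServiceRecord(SPP_UUID);
        socket.connect();                       // blocks until the RFCOMM link is established
        OutputStream out = socket.getOutputStream();
        InputStream in = socket.getInputStream();

        byte[] payload = new byte[payloadSize];
        byte[] echo = new byte[payloadSize];
        long[] rtts = new long[count];
        for (int i = 0; i < count; i++) {
            long start = System.nanoTime();
            out.write(payload);
            out.flush();
            int read = 0;
            while (read < payloadSize) {        // wait for the complete echo
                int n = in.read(echo, read, payloadSize - read);
                if (n < 0) throw new IOException("connection closed");
                read += n;
            }
            rtts[i] = System.nanoTime() - start;
        }
        socket.close();
        return rtts;
    }

    /** Round-trip-time jitter, taken here as the standard deviation of the measured values. */
    public static double jitterNs(long[] rtts) {
        double mean = 0;
        for (long t : rtts) mean += t;
        mean /= rtts.length;
        double var = 0;
        for (long t : rtts) var += (t - mean) * (t - mean);
        return Math.sqrt(var / rtts.length);
    }
}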
II. ANDROID SOFTWARE STACK
The Android architecture is a software stack comprising applications, a Linux operating system, a runtime environment and various services and libraries. Each layer in the stack and each component in each layer are tightly knitted together to provide a very effective application development and execution platform, as depicted in Fig. 1.
At the bottom of the stack is the Linux 2.6 kernel. Its role is to abstract the hardware into low level software that the higher layers can interact with. It achieves this through hardware device drivers and low level power, process and memory management. Linux is an open source operating system that has been around for decades, with widespread application in servers, embedded systems and robotics due to its reliability, efficiency and modularisation of hardware drivers, which can be loaded and unloaded while the system is operating. Each application running on Android is executed on an instance of the Dalvik Virtual Machine. Each application is effectively isolated from every other application, from the operating system and from the hardware device drivers. So each application is developed to run on the Dalvik VM rather than a particular hardware platform. The operation of the Dalvik VM is similar to the Java VM but more efficient in terms of memory usage and processing power requirements, and thus more suitable for smart phones.
The application framework provides already developed support tools for the application while it is running and the basic resources required for an application to run, enabling the developer to program at a higher level. Low level tasks are automated, such as the construction, management and end-of-life clean-up of an activity, the package file structure of an application, and access to common resources. It contains the graphical views that the user would use for the GUI and content providers to share data between processes.
III. BLUETOOTH PROTOCOL
The industrial, scientific and medical (ISM) bands are ranges of radio frequencies reserved internationally for devices that generate electromagnetic emissions with the potential to interfere with telecommunication equipment. Devices that generate such emissions, for example microwave ovens, RF heaters and medical diathermy machines, are required to limit their power emissions in these frequency bands, and telecommunication equipment sensitive to electromagnetic interference should avoid these frequencies. However, ISM bands have become popular for short-range radio frequency communications such as Bluetooth and wi-fi LAN networks, where the potential for interference is limited by their short broadcasting ranges.
The bluetooth channel is a pseudo-random frequency hopping pattern over 79 channels, each with a bandwidth of 1 MHz, within the 2.4 GHz ISM band, operating between 2.402 and 2.480 GHz. The hopping pattern is determined by an algorithm using the address and clock of the master device, to which all devices in the piconet are connected and synchronised. A packet of data is transmitted on a channel, and each device then switches to the next channel in the frequency hopping pattern before another packet is sent.
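As a rough illustration of the channel plan just described, the sketch below maps a hop index to its carrier frequency. The class and method names are our own, and the actual hop-selection algorithm driven by the master's address and clock is not reproduced here.

```java
// Illustrative only: the 79-channel, 1 MHz-spaced plan starting at 2402 MHz.
public class HopChannel {
    static final int BASE_MHZ = 2402;
    static final int CHANNELS = 79;

    // Returns the carrier frequency in MHz for hop channel k (0..78).
    static int channelFrequencyMhz(int k) {
        if (k < 0 || k >= CHANNELS) {
            throw new IllegalArgumentException("channel out of range");
        }
        return BASE_MHZ + k; // 2402, 2403, ..., 2480 MHz
    }

    public static void main(String[] args) {
        System.out.println(channelFrequencyMhz(0));  // 2402
        System.out.println(channelFrequencyMhz(78)); // 2480
    }
}
```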
Bluetooth incorporates features to make it more resilient to interference and data loss. Adaptive frequency hopping is employed to dynamically alter transmission frequencies to avoid those where interference is present. Operating within the ISM band, interference can be expected from other bluetooth devices, IEEE 802.11 WLAN and microwave ovens. The frequency hopping pattern, determined by the master device's address and clock, can be changed dynamically to avoid frequencies where poor performance due to interference has been detected.
Communication over the channel is serial in nature, but parallel communication is achieved by creating time slots to share transmission time, called time-division duplex (TDD). Each time slot is 625 μs in length, giving a nominal hopping rate of 1600 hops/sec. Master and slave devices take turns to transmit: the master device transmits in even-numbered time slots and the slave device transmits in odd-numbered time slots. The hop frequency remains constant for the duration of a transmission. When the transmission has completed, the channel changes to the next hop frequency in the pattern and the other device transmits.
Large data files are broken down into packets small enough to be transmitted in one time slot. Each packet consists of a header and a payload: the header contains information for channel maintenance and error detection codes, and the payload contains the user data being transmitted. Packet construction is dynamic, with size and composition adapted to conditions; Table II gives a breakdown of the various data packets that can be used. DM1 refers to a small packet designed to be transmitted in one time slot with forward error correction (FEC) overhead, while DH5 refers to a large packet designed to be transmitted in 5 consecutive time slots with no FEC overhead. Bluetooth employs three error detection techniques. FEC 1/3 simply repeats each bit in the packet header 3 times; errors in the header are easily detected if the bits are not in triplicate and are corrected by majority vote. FEC 2/3 is a shortened Hamming code implemented by appending 5 parity bits to each 10-bit word, making it a 15-bit word; it can correct all single errors and detect all double errors. A CRC (cyclic redundancy check) is applied to the data payload to check its integrity by referencing the remainder of a polynomial division over the bits in the payload. In addition, ARQ (automatic retransmission request) ensures that packets are re-transmitted until an acknowledgement of a successful, error-free transmission is received from the intended recipient device. Error checking overhead adds to transmission delays, so there is a trade-off between the dual objectives of transmission speed and transmission reliability when using bluetooth.
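To make the FEC 1/3 scheme concrete, the following minimal sketch encodes a header by repeating each bit three times and decodes it by majority vote. The class and method names are hypothetical, and in practice this coding is handled by the bluetooth controller rather than application code.

```java
import java.util.Arrays;

// Illustrative sketch of FEC 1/3: triple repetition with majority-vote decoding.
public class Fec13 {
    // Encode: repeat each header bit three times.
    static int[] encode(int[] bits) {
        int[] out = new int[bits.length * 3];
        for (int i = 0; i < bits.length; i++) {
            out[3 * i] = out[3 * i + 1] = out[3 * i + 2] = bits[i];
        }
        return out;
    }

    // Decode: majority vote over each triplet corrects any single flipped bit.
    static int[] decode(int[] coded) {
        int[] out = new int[coded.length / 3];
        for (int i = 0; i < out.length; i++) {
            int sum = coded[3 * i] + coded[3 * i + 1] + coded[3 * i + 2];
            out[i] = (sum >= 2) ? 1 : 0;
        }
        return out;
    }

    public static void main(String[] args) {
        int[] header = {1, 0, 1, 1};
        int[] coded = encode(header);
        coded[4] ^= 1; // flip one bit to simulate interference
        System.out.println(Arrays.toString(decode(coded))); // [1, 0, 1, 1]
    }
}
```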
IV. APPLICATION DESIGN
The application assesses the Android accelerometers and the bluetooth module independently of each other. By assessing them independently, their individual contributions to the collective performance when transmitting real-time sensor data wirelessly can be quantified.
Although Android provides the BluetoothAdapter class as an abstraction of the bluetooth hardware, working with bluetooth programmatically is still quite complicated. Because this application was built to assess bluetooth for real world applications, the bluetooth component was built as a reusable library that can be imported into any future project requiring bluetooth connectivity. Bluetooth operations are effectively simplified to creating an object of BluetoothLibrary and making the correct method calls and interface implementations.
Taking these points into consideration, the application has the following components: an activity to measure the performance of the accelerometers; an importable BluetoothLibrary to manage the bluetooth connection; and a set of activities to conduct the bluetooth performance assessment.

A. Sensor Sampling Period Testing

Within the main activity the user can select the sensor testing activity. There are 4 programmable sampling periods for the linear accelerometers, corresponding to the delay Android imposes before sending the sensor data to the application: "Fastest" corresponds to no imposed delay, "Game" to an imposed delay of 20 mS, "UI" to an imposed delay of 70 mS and "Normal" to an imposed delay of 200 mS. The user selects the sampling period via radio buttons (Fig. 2). Pressing the "Begin Test" button starts the application polling the accelerometers. When the test is complete, the mean sampling period and the standard deviation of the sampling period are posted to the screen, and the user can choose to save the results to a csv file in the smartphone's primary memory location.

Fig. 2. Sensor sampling period testing activity

Fig. 3 is a graphical representation of the operation of the test. Sensor data is received from the onSensorChanged callback method on the main UI thread and the time of that event is recorded. The sensor value itself is unimportant for testing, only its timestamp. The timestamp is sent to a parallel thread where all calculations and screen updates are processed, to avoid blocking the callback method in the main thread. This is particularly important when the sampling period is set to "Fastest" or "Game", where the GUI can hang due to blocking of the sensor callback. The timestamps from the sensor readings are buffered to avoid overwhelming the run method of the thread. Within the thread, calculations are performed to determine the sampling period, and a running average of the sampling period is displayed on the screen. The sensor timestamp and period are also inserted into a SQLite database for temporary storage and can be saved to the phone's memory card for further analysis if the user so wishes. Upon completion of the test, the standard deviation of the sampling period (jitter) is calculated by iterating through the SQLite database and displayed on the screen.
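A minimal sketch of the callback-plus-worker-thread pattern described above is given below. The class name and the hand-off via a blocking queue are our own illustrative choices; the real application additionally updates the GUI and writes to SQLite.

```java
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch: record sensor timestamps in the callback, compute periods on a worker thread.
public class PeriodLogger implements SensorEventListener {
    private final BlockingQueue<Long> timestamps = new LinkedBlockingQueue<Long>();

    @Override
    public void onSensorChanged(SensorEvent event) {
        // Only the timestamp matters for sampling-period measurement; never block here.
        timestamps.offer(event.timestamp); // nanoseconds
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { /* not needed */ }

    // Worker thread: derives the period between consecutive samples.
    public Thread startWorker() {
        Thread worker = new Thread(new Runnable() {
            @Override
            public void run() {
                long previous = -1;
                try {
                    while (!Thread.currentThread().isInterrupted()) {
                        long current = timestamps.take();
                        if (previous > 0) {
                            double periodMs = (current - previous) / 1e6;
                            // In the full application this value feeds a running
                            // average on screen and a SQLite table for jitter analysis.
                            System.out.println("period = " + periodMs + " mS");
                        }
                        previous = current;
                    }
                } catch (InterruptedException ignored) {
                    // test finished
                }
            }
        });
        worker.start();
        return worker;
    }
}
```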
B. Bluetooth Library
All of the bluetooth related operations that are valid for Android API 8 and that were required for this project are contained within the bluetooth library. The library enables turning bluetooth on and off, making the device discoverable, discovering other devices, connecting with up to 7 other devices and establishing the input and output streams of a bluetooth connection. Within the library is a class, BluetoothLibrary, which contains all of the public methods required for the bluetooth operations. The bluetooth library can be imported into any Android application requiring bluetooth functionality. The structure of the library is shown in Fig. 4.
The connectivity part of the library is contained within its own service. A service in Android is independent of the lifecycle of any activity of the application, or of the application itself, with the advantage that established connections can be maintained between activities. Otherwise, if the user switched between activities, the activity that established the connection would be paused or destroyed and the connection would be lost. The service contains 3 threads: a listening thread which listens for a connection attempt and blocks on the accept method, a connect thread which tries to connect and blocks on the connect method, and a connected thread which sets up the input and output streams and blocks on the read method. A typical server operation listens for a device on the listening thread before switching to the connected thread when it accepts a connection, whereas a typical client operation tries to connect on the connect thread before switching to the connected thread to manage the connection.
A programmer using the library who wishes to do anything bluetooth related should create an object of BluetoothLibrary within the activity and then call its public methods. To perform a connection related operation, such as checking whether a thread is running, the activity needs to bind to the service by calling the bindToService method in onResume and the unBindFromService method in onPause. The call to bindToService is asynchronous, which means the next line of code will be executed before the activity is bound to the service, potentially crashing the application. The programmer can avoid this problem by implementing the onBindListener interface, which provides a callback when the activity is bound to the service. Other interfaces are available for turning bluetooth on/off, discovering new devices and receiving data on the input stream.
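A hypothetical usage sketch of this pattern is shown below. The method and interface names bindToService, unBindFromService and onBindListener come from the description above, but the constructor signature and the callback method name onBound are assumptions made for illustration only.

```java
// Hypothetical usage of the described BluetoothLibrary; signatures are assumed.
public class SensorStreamActivity extends android.app.Activity
        implements BluetoothLibrary.onBindListener {

    private BluetoothLibrary bluetooth;

    @Override
    protected void onResume() {
        super.onResume();
        bluetooth = new BluetoothLibrary(this);   // assumed constructor
        bluetooth.bindToService(this);            // asynchronous: wait for the callback
    }

    @Override
    public void onBound() {                       // assumed callback name
        // Only now is it safe to call connection-related methods on the library.
    }

    @Override
    protected void onPause() {
        bluetooth.unBindFromService();
        super.onPause();
    }
}
```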
C. Bluetooth Performance Testing
The main activity of the application allows all of the normal bluetooth operations to be performed: turning the bluetooth radio on/off, making the device discoverable by other devices, scanning for other devices and initiating a connection. These are achieved using the imported bluetooth library. Once two devices are connected, the bluetooth testing activity can begin. Fig. 5 shows the operation of the bluetooth testing activity. One smartphone takes on the role of client and the other the role of server. The client device sends data to the server device, which returns the data to the client; the client then calculates the round-trip-time (rtt) for the transmission. This procedure is repeated for 30 seconds, after which testing automatically stops. When the transmission of data begins, the current timestamp is retrieved from the client's system clock. A timestamp is of type long (8 bytes) and it is this timestamp that is transmitted in the experiment. Depending on the size of data under test, a long array is constructed consisting of the timestamp and filler material, with the exception of the 4 byte payload size, which is tested differently. The data payload is constructed at the beginning of the experiment and sent from client to server and back again in a 30 second loop. Each time the payload is transmitted by the client, the current timestamp is retrieved from the client's system clock and inserted at the start of the long array.
Upon receipt of the returned payload, the client retrieves the timestamp and sends it to a thread to perform calculations, screen updates and data storage while the client resends the packet with a new current timestamp. Within parallel threads the round-trip-time is calculated, a running average of the round-trip-time is updated on the screen and the new data is inserted into the SQLite database for storage. The data rate is calculated from the round-trip-time and packet size. As in the sensor sampling period test, the network jitter statistic is determined by iterating through the database and calculating the standard deviation of the round-trip-time. The results screen from this part of the application is illustrated in Fig. 6. Multi-threading is used to avoid blocking the main UI thread and slowing the performance of the transmission. In the case of the 4 byte test, a long array payload carrying the send timestamp is not an option. If the user selects the 4 byte test, the payload sent is an auto-incrementing integer which identifies the payload, and the timestamp for when the 4 byte payload was sent is recorded in a long variable elsewhere. Apart from that difference, the 4 byte test is the same as for the other payload sizes.
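The client-side loop can be sketched as below. This is a simplified reading of the procedure above, assuming the streams come from an already established bluetooth socket and that the server echoes every payload unchanged; class and method names are illustrative.

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Sketch of the round-trip-time measurement for long-array payloads.
public class RttClient {
    static void runTest(InputStream in, OutputStream out, int payloadLongs, long durationMs)
            throws IOException {
        DataOutputStream dout = new DataOutputStream(out);
        DataInputStream din = new DataInputStream(in);
        long end = System.currentTimeMillis() + durationMs; // e.g. 30 000 ms

        while (System.currentTimeMillis() < end) {
            long sent = System.currentTimeMillis();
            dout.writeLong(sent);                 // timestamp at the head of the payload
            for (int i = 1; i < payloadLongs; i++) {
                dout.writeLong(0L);               // filler material
            }
            dout.flush();

            long echoed = din.readLong();         // server echoes the payload back
            for (int i = 1; i < payloadLongs; i++) {
                din.readLong();
            }
            long rtt = System.currentTimeMillis() - echoed;
            // In the full application rtt feeds a running average and a SQLite table.
            System.out.println("rtt = " + rtt + " mS");
        }
    }
}
```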
V. TESTING
The smartphones used in testing were a Samsung S3 running JellyBean and a TCL V860 running GingerBread.
A. Sensor Sampling Period Testing
The procedure for testing the sensor sampling period is straightforward. The user selects one of the 4 options for the Android sampling period. When the test is started, a running average of the sampling period is displayed on the screen and, upon completion of the test, the standard deviation (jitter) of the sampling period is determined. During tests the running average of the sampling period converges on a value, and further testing is not required as the running average will not change significantly from this value. The data stored in the SQLite database comprises the timestamp of each sensor reading, the period since the previous reading, the mean sampling period and the jitter for the experiment as a whole.
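For reference, the jitter statistic quoted throughout the results is simply the standard deviation of the stored periods (or round-trip-times). A minimal sketch of that calculation, with illustrative names and example values, follows.

```java
// Sketch of the jitter calculation: standard deviation over the recorded periods.
public class Jitter {
    static double standardDeviation(double[] periodsMs) {
        double mean = 0;
        for (double p : periodsMs) mean += p;
        mean /= periodsMs.length;

        double sumSq = 0;
        for (double p : periodsMs) sumSq += (p - mean) * (p - mean);
        return Math.sqrt(sumSq / periodsMs.length);
    }

    public static void main(String[] args) {
        double[] periods = {200, 200, 201, 199, 200}; // e.g. "Normal" sampling near 200 mS
        System.out.printf("jitter = %.2f mS%n", standardDeviation(periods));
    }
}
```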
B. Bluetooth Testing
The testing of the bluetooth medium was carried out indoors, where it is envisaged bluetooth will be used most of the time. When testing bluetooth's performance, the factors examined are distance, data payload size and sources of interference. The maximum distance at which class 2 bluetooth devices are operable is 10 metres. The distance between the 2 devices is varied at intervals of 1, 3, 5, 7 and 9 metres. The payload size is varied across 4, 8, 64, 256, 1024 and 2048 bytes; 4 bytes is the typical size of a sensor reading (a float or integer value) and 8 bytes is the size of a long value such as the timestamp.
Testing is carried out in the presence of no interference, an 802.11 wi-fi wireless router and a microwave oven. The testing is carried out for each payload size, at each distance and with each source of interference.
VI. RESULTS

A. Sensor Sampling Period Testing
The sensor sampling period was tested as outlined in the previous section. The sampling period for all sensor events was stored in the database and graphed for the 4 programmable sampling periods in Android. The mean sampling period was calculated in real time and the jitter was calculated from the sampling periods in the database. The results of testing the "Normal" (200 mS, 5 Hz) sampling rate are presented in Fig. 7.

Fig. 7. Performance of "Normal" sensor sampling rate

"Normal" represents the lowest sampling frequency and longest sampling period programmable in Android, 5 Hz and 200 mS respectively, and therefore puts the lowest strain on both the hardware and software. Approximately 90 readings were taken and the performance was consistent for all samples; jitter was measured at 0 mS in this test. Although the jitter performance of the accelerometer was excellent in this test, there is limited use for such low frequency sampling. Possible uses are measuring the movement of large structures, recording seismic activity and monitoring tall buildings' reactions to seismic activity and wind conditions.

Fig. 8 presents the results of the "UI" sensor sampling rate. "UI" aims to sample the accelerometers every 70 mS (14.3 Hz), almost 3 times faster than the "Normal" sampling rate. From the graph, the performance of Android at "UI" is very good and consistent for the most part. There were 176 sensor samples taken, with 2 inconsistencies at around sample no. 65 and no. 175. Sample no. 65 was polled 85 mS after sample no. 64, which is 15 mS late; sample no. 66 was then polled 55 mS after sample no. 65 and 140 mS after sample no. 64. This demonstrates that Android schedules sensor sampling at regular time intervals from an initial setpoint, likely to be when the sensorManager is initialized, rather than from the previous sensor sample. A similar result is observed at sample no. 175. Android cannot maintain regular sampling of the accelerometer at 14.3 Hz, possibly because software running in the background uses hardware and operating system resources. It can be deduced that the accelerometers are not given priority by Android when under load. The mean sampling period was 70 mS with a jitter of 0.85 mS in this test.
Understanding this result aids in interpreting the results from testing the "Game" and "Fastest" sampling rates. Android claims that "Game" samples at 20 mS (50 Hz) and that "Fastest" is limited only by the operating system, with no imposed delay. The results in Fig. 9 show the measured performance at these sampling rates. In the case of "Game", the mean sampling period is calculated to be 20.6 mS, very similar to the nominal value, but with a jitter of 7.8 mS; sampling periods of 50 mS are not uncommon. Similarly, in the case of "Fastest", the mean sampling period is calculated to be 5.06 mS with a jitter of 4 mS; sampling periods of 20 mS are not uncommon, followed by 2 or 3 samples taken within a couple of mS of each other, and this pattern repeats throughout the test. Sampling period consistency is very poor at the higher sampling rates. The Android operating system and other applications have priority over system resources and interfere with sensor sampling.
B. Bluetooth Performance
The performance of bluetooth was tested as outlined in Section V. The round-trip-times (rtt) from each experiment were saved to a csv file for analysis and graphing, along with the mean round-trip-time and standard deviation (jitter).
The effect of distance on rtt is shown in Fig. 10. For small data payload sizes, distance has no discernible effect on rtt when tested in an environment with no interference. However, for the two larger payload sizes there is a step change of 10-14 mS improvement in rtt between 3 and 5 metres. At the application level it is not immediately obvious what low level bluetooth changes caused this step change; however, bluetooth is a dynamic wireless transmission protocol, continuously changing frequencies, packet size and error correction overhead to improve performance. It is possible that for data greater than 1 kB transmitted over distances greater than 3 metres, larger data packets were used, perhaps a 5 slot packet rather than the 3 slot packet used for smaller data sizes.

Fig. 10. Effect of distance on round-trip-time

Fig. 11 shows the effect of payload size on rtt, where a strong correlation between payload size and rtt is apparent (note that the 7 metre curve is partially obscured by the 9 metre curve). Unexpectedly, the rtt performance of bluetooth is poorer at 1 and 3 metres for data payload sizes greater than 1 kB. The relationship appears to be non-linear, but there is not a sufficient range of payload sizes to fully model the trend. The overhead induced by increasing payload appears more pronounced over the shorter distances. Larger data payloads require a greater number of packet transmissions, and this effect is non-linear due to the dynamic nature of bluetooth packet assignment.

One important result from Fig. 11 is the offset on the rtt axis. Given that payloads of 4 and 8 bytes were tested, which are almost the smallest payloads possible (only char is smaller at 2 bytes), there appears to be a minimum rtt overhead of approximately 10 mS per payload at the Android application level, regardless of payload size. At the Android application level it takes the operating system a minimum of 10 mS to process the outgoing and incoming data for each payload in a round-trip scenario. The consequence of this result is that, for round trips, the maximum payload frequency is limited to 100 Hz, with an estimated 200 Hz in the case of a one-way transmission. This result suggests that Android is more suitable for single large data file transmission rather than multiple small packets.
From calculations of data rate, with a payload size of 2048 bytes the data rate is in the region of 30-35 kB/s, whereas for the 4 byte payload the data rate is approximately 800 bytes/s. Bluetooth is rated at 2.1 Mb/s, and that may be achievable when transmitting a single very large data payload. From Android's perspective, most bluetooth users would be transmitting large data files rather than bursty data of small size. This effect can also be seen when using bluetooth to stream audio, where a noticeable latency can be detected.
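One plausible reading of this data-rate calculation is total bytes exchanged per round trip divided by the round-trip-time. The sketch below uses that reading; the example rtt values are assumptions chosen to reproduce the order of magnitude of the figures quoted above, not measurements from the paper.

```java
// Sketch of the data-rate calculation, assuming both directions of the round trip count.
public class DataRate {
    static double bytesPerSecond(int payloadBytes, double rttMs) {
        return (2.0 * payloadBytes) / (rttMs / 1000.0); // client->server and back
    }

    public static void main(String[] args) {
        System.out.println(bytesPerSecond(4, 10.0));     // ~800 B/s at the ~10 mS floor
        System.out.println(bytesPerSecond(2048, 120.0)); // tens of kB/s for large payloads
    }
}
```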
The effects of wi-fi and microwave interference on rtt and jitter are shown in Fig. 12. IEEE 802.11 wi-fi and microwave ovens operate within the same ISM band as bluetooth and have been identified as potential sources of interference. Microwave ovens operate at a fixed frequency of 2.45 GHz. IEEE 802.11 operates between 2.4 and 2.5 GHz and uses Direct Sequence Spread Spectrum (DSSS) to avoid interference.
From the graphs, only microwave interference appears to have an effect on rtt, and an increasing effect for larger data payload sizes. There is a consistent jitter of 3-4 mS regardless of the interference source, or with no interference, which is to be expected from unprotected wireless transmission through the air.
However, inconsistencies appear in the graph, with wi-fi performing better than the interference-free case in the rtt test, which is unexpected. Further inconsistencies are apparent in the jitter graph, where, surprisingly, wi-fi and microwave give more consistent round-trip-times than the interference-free test. Interference is by its nature non-homogeneous and inconsistent; furthermore, the interference-free test is free of obvious sources of electromagnetic interference but not of background radiation in the air. Bluetooth's performance will therefore be inconsistent and unpredictable, depending on the conditions of the environment in which it is operating. Because the interference tests were inconclusive, owing to the inconsistent nature of interference and bluetooth's adaptation to the transmitting environment, further analysis of the data from the microwave oven test was carried out. Fig. 13 shows the effects of microwave interference on rtt and jitter. Rtt is largely unaffected by distance from a microwave source; previous results in Fig. 10 had already shown that distance alone had no effect on rtt. The jitter graph shows that the microwave increases jitter when both smartphones are within 1-3 metres of the source.

The on-board accelerometers can only be consistently sampled at 14.3 Hz. The maximum mean sampling frequency is 200 Hz, but with a standard deviation of 80%. The Android accelerometers are geared towards screen orientation and game play, with low system priority given to the on-board sensors.
In terms of utilising Android smartphones' on-board sensor and bluetooth technology for real-time control applications, it is estimated from the results of the testing in this paper that the sensors' reliable sampling limit is 14 Hz and that the sensors' output can be transmitted via bluetooth to another device within 5 mS over a range of 10 metres. The effects of distance and interference can be neglected.
Fig. 3. Operation of the sensor testing activity
Fig. 5. Operation of the bluetooth performance test
Fig. 11. Effect of data payload size on round-trip-time
Fig. 12. Effect of interference on jitter at a range of 9 metres
Fig. 13.Effect of microwave interference on round-trip-time and jitter at various distances VII.CONCLUSION This paper has determined that Android bluetooth is geared towards sending single large data files such as music or video or document sharing rather than high frequency small discrete pieces of data such as sensor readings.Bluetooth can transmit sensor readings of up to 64 bytes at 100 Hz round-trip, or 200 Hz one way.Distance has no effect on transmission time for a class 2 device within the 10 metre range.Transmission time increases non-linearly with increasing data size.Bluetooth is resilient to microwave and wi-fi interference.
TABLE I. ANDROID DASHBOARDS AND DISTRIBUTION LEVELS
"Computer Science"
] |